May 15 01:08:07.656501 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 14 23:14:51 -00 2025 May 15 01:08:07.656521 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 01:08:07.656531 kernel: Disabled fast string operations May 15 01:08:07.656538 kernel: BIOS-provided physical RAM map: May 15 01:08:07.656544 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 15 01:08:07.656551 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 15 01:08:07.656561 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 15 01:08:07.656568 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 15 01:08:07.656575 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 15 01:08:07.656582 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 15 01:08:07.656589 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 15 01:08:07.656596 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 15 01:08:07.656603 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 15 01:08:07.656610 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 15 01:08:07.656618 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 15 01:08:07.656624 kernel: NX (Execute Disable) protection: active May 15 01:08:07.656632 kernel: SMBIOS 2.7 present. May 15 01:08:07.656640 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 15 01:08:07.656647 kernel: vmware: hypercall mode: 0x00 May 15 01:08:07.656652 kernel: Hypervisor detected: VMware May 15 01:08:07.656658 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 15 01:08:07.656663 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 15 01:08:07.656671 kernel: vmware: using clock offset of 7724220809 ns May 15 01:08:07.656678 kernel: tsc: Detected 3408.000 MHz processor May 15 01:08:07.656686 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 01:08:07.656692 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 01:08:07.656697 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 15 01:08:07.656702 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 01:08:07.656709 kernel: total RAM covered: 3072M May 15 01:08:07.656719 kernel: Found optimal setting for mtrr clean up May 15 01:08:07.656725 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 15 01:08:07.656729 kernel: Using GB pages for direct mapping May 15 01:08:07.656743 kernel: ACPI: Early table checksum verification disabled May 15 01:08:07.656748 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 15 01:08:07.656753 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 15 01:08:07.656758 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 15 01:08:07.656763 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 15 01:08:07.656768 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 15 01:08:07.656773 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 15 01:08:07.656783 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 15 01:08:07.656795 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) May 15 01:08:07.656803 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 15 01:08:07.656810 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 15 01:08:07.656815 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 15 01:08:07.656822 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 15 01:08:07.656827 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 15 01:08:07.656832 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 15 01:08:07.656838 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 15 01:08:07.656843 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 15 01:08:07.656848 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 15 01:08:07.656853 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 15 01:08:07.656860 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 15 01:08:07.656866 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 15 01:08:07.656872 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 15 01:08:07.656878 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 15 01:08:07.656883 kernel: system APIC only can use physical flat May 15 01:08:07.656888 kernel: Setting APIC routing to physical flat. 
May 15 01:08:07.656896 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 15 01:08:07.656904 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 15 01:08:07.656913 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 15 01:08:07.656921 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 15 01:08:07.656930 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 15 01:08:07.656937 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 15 01:08:07.656943 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 15 01:08:07.656949 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 15 01:08:07.656958 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 15 01:08:07.656966 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 15 01:08:07.656974 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 15 01:08:07.656980 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 15 01:08:07.656986 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 15 01:08:07.656993 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 15 01:08:07.657001 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 15 01:08:07.657011 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 15 01:08:07.657020 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 15 01:08:07.657028 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 15 01:08:07.657036 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 15 01:08:07.657044 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 15 01:08:07.657051 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 15 01:08:07.657056 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 15 01:08:07.657061 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 15 01:08:07.657069 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 15 01:08:07.657075 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 15 01:08:07.657081 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 15 01:08:07.657086 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 15 01:08:07.657091 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 15 01:08:07.657096 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 15 01:08:07.657101 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 May 15 01:08:07.657109 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 May 15 01:08:07.657118 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 15 01:08:07.657126 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 15 01:08:07.657135 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 15 01:08:07.657143 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 15 01:08:07.657152 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 May 15 01:08:07.657161 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 15 01:08:07.657169 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 15 01:08:07.657178 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 15 01:08:07.657183 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 15 01:08:07.657188 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 15 01:08:07.657193 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 15 01:08:07.657198 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 15 01:08:07.657204 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 15 01:08:07.657209 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 15 01:08:07.657217 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 15 01:08:07.657225 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 15 01:08:07.657234 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 15 01:08:07.657242 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 15 01:08:07.657251 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 15 01:08:07.657259 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 15 01:08:07.657265 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 15 01:08:07.657270 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 15 01:08:07.657275 kernel: SRAT: PXM 0 -> APIC 0x6a 
-> Node 0 May 15 01:08:07.657282 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 15 01:08:07.657287 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 15 01:08:07.657292 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 15 01:08:07.657297 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 15 01:08:07.657303 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 15 01:08:07.657307 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 15 01:08:07.657313 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 15 01:08:07.657322 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 15 01:08:07.657329 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 15 01:08:07.657334 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 15 01:08:07.657339 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 15 01:08:07.657350 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 15 01:08:07.657356 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 May 15 01:08:07.657362 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 15 01:08:07.657367 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 15 01:08:07.657374 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 15 01:08:07.657379 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 15 01:08:07.657385 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 15 01:08:07.657392 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 15 01:08:07.657400 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 15 01:08:07.657409 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 15 01:08:07.657415 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 15 01:08:07.657421 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 15 01:08:07.657430 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 15 01:08:07.657439 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 15 01:08:07.657448 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 15 01:08:07.657456 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 15 01:08:07.657461 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 15 01:08:07.657468 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 15 01:08:07.657474 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 May 15 01:08:07.657479 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 15 01:08:07.657485 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 15 01:08:07.657491 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 15 01:08:07.657499 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 15 01:08:07.657504 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 15 01:08:07.657510 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 15 01:08:07.657515 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 15 01:08:07.657520 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 15 01:08:07.657527 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 15 01:08:07.657536 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 15 01:08:07.657545 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 15 01:08:07.657552 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 15 01:08:07.657558 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 15 01:08:07.657563 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 15 01:08:07.657569 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 15 01:08:07.657574 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 15 01:08:07.657579 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 15 01:08:07.657585 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 15 01:08:07.657595 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 15 01:08:07.657604 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 15 01:08:07.657611 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 15 01:08:07.657617 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 15 01:08:07.657622 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 15 01:08:07.657628 kernel: SRAT: PXM 0 -> 
APIC 0xd6 -> Node 0 May 15 01:08:07.657633 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 15 01:08:07.657639 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 15 01:08:07.657648 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 15 01:08:07.657657 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 15 01:08:07.657666 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 15 01:08:07.657671 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 15 01:08:07.657677 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 15 01:08:07.657683 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 15 01:08:07.657692 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 15 01:08:07.657701 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 15 01:08:07.657709 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 15 01:08:07.657715 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 15 01:08:07.657721 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 15 01:08:07.657726 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 15 01:08:07.657741 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 15 01:08:07.657747 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 15 01:08:07.657752 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 15 01:08:07.657759 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 15 01:08:07.657768 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 15 01:08:07.657777 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 15 01:08:07.657783 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 15 01:08:07.657788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 15 01:08:07.657794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 15 01:08:07.657801 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 15 01:08:07.657807 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 15 01:08:07.657813 kernel: Zone ranges: May 15 01:08:07.657822 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 01:08:07.657831 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 15 01:08:07.657840 kernel: Normal empty May 15 01:08:07.657849 kernel: Movable zone start for each node May 15 01:08:07.657858 kernel: Early memory node ranges May 15 01:08:07.657865 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 15 01:08:07.657872 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 15 01:08:07.657878 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 15 01:08:07.657884 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 15 01:08:07.657889 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 01:08:07.657895 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 15 01:08:07.657900 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 15 01:08:07.657906 kernel: ACPI: PM-Timer IO Port: 0x1008 May 15 01:08:07.657911 kernel: system APIC only can use physical flat May 15 01:08:07.657917 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 15 01:08:07.657923 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 15 01:08:07.657929 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 15 01:08:07.657935 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 15 01:08:07.657940 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 15 01:08:07.657946 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 15 01:08:07.657951 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 15 01:08:07.657957 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] 
high edge lint[0x1]) May 15 01:08:07.657962 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 15 01:08:07.657968 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 15 01:08:07.657973 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 15 01:08:07.657980 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 15 01:08:07.657985 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 15 01:08:07.657991 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 15 01:08:07.657996 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 15 01:08:07.658002 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 15 01:08:07.658007 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 15 01:08:07.658013 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 15 01:08:07.658018 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 15 01:08:07.658024 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 15 01:08:07.658032 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 15 01:08:07.658042 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 15 01:08:07.658048 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 15 01:08:07.658054 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 15 01:08:07.658061 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 15 01:08:07.658070 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 15 01:08:07.658079 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 15 01:08:07.658088 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 15 01:08:07.658097 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 15 01:08:07.658106 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 15 01:08:07.658117 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 15 01:08:07.658127 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 15 01:08:07.658135 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 15 01:08:07.658141 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 15 01:08:07.658146 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 15 01:08:07.658152 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 15 01:08:07.658157 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 15 01:08:07.658163 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 15 01:08:07.658168 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 15 01:08:07.658175 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 15 01:08:07.658186 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 15 01:08:07.658195 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 15 01:08:07.658204 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 15 01:08:07.658213 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 15 01:08:07.658222 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 15 01:08:07.658231 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 15 01:08:07.658239 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 15 01:08:07.658245 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 15 01:08:07.658250 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 15 01:08:07.658257 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 15 01:08:07.658263 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x32] high edge lint[0x1]) May 15 01:08:07.658268 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 15 01:08:07.658274 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 15 01:08:07.658279 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 15 01:08:07.658285 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 15 01:08:07.658294 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 15 01:08:07.658303 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 15 01:08:07.658310 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 15 01:08:07.658317 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 15 01:08:07.658323 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 15 01:08:07.658328 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 15 01:08:07.658334 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 15 01:08:07.658339 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 15 01:08:07.658353 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 15 01:08:07.658362 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 15 01:08:07.658370 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 15 01:08:07.658376 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 15 01:08:07.658381 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 15 01:08:07.658389 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 15 01:08:07.658397 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 15 01:08:07.658406 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 15 01:08:07.658412 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 15 01:08:07.658417 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 15 01:08:07.658423 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 15 01:08:07.658429 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 15 01:08:07.658438 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 15 01:08:07.658448 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 15 01:08:07.658459 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 15 01:08:07.658465 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 15 01:08:07.658471 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 15 01:08:07.658477 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 15 01:08:07.658482 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 15 01:08:07.664780 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 15 01:08:07.664790 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 15 01:08:07.664796 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 15 01:08:07.664802 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 15 01:08:07.664808 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 15 01:08:07.664817 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 15 01:08:07.664822 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 15 01:08:07.664828 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 15 01:08:07.664833 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 15 01:08:07.664839 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 15 01:08:07.664844 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 15 01:08:07.664850 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 15 01:08:07.664855 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 15 01:08:07.664861 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 15 01:08:07.664868 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 15 01:08:07.664873 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 15 01:08:07.664879 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 15 01:08:07.664885 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 15 01:08:07.664890 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 15 01:08:07.664896 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 15 01:08:07.664901 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 15 01:08:07.664907 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 15 01:08:07.664912 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 15 01:08:07.664919 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 15 01:08:07.664925 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 15 01:08:07.664930 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 15 01:08:07.664936 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 15 01:08:07.664941 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 15 01:08:07.664947 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 15 01:08:07.664952 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 15 01:08:07.664958 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 15 01:08:07.664963 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 15 01:08:07.664969 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 15 01:08:07.664976 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 15 01:08:07.664981 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 15 01:08:07.664987 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 15 01:08:07.664992 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 15 01:08:07.664997 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 15 01:08:07.665003 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 15 01:08:07.665009 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 15 01:08:07.665016 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 15 01:08:07.665022 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 15 01:08:07.665029 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 15 01:08:07.665034 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 15 01:08:07.665040 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 15 01:08:07.665045 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 15 01:08:07.665051 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 15 01:08:07.665057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 15 01:08:07.665066 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 01:08:07.665075 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 15 01:08:07.665084 kernel: TSC deadline timer available May 15 01:08:07.665095 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 15 01:08:07.665104 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 15 01:08:07.665111 kernel: Booting paravirtualized kernel on VMware hypervisor May 15 01:08:07.665117 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 01:08:07.665122 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 May 15 01:08:07.665128 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 May 15 01:08:07.665135 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 May 15 01:08:07.665144 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 15 01:08:07.665152 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 15 01:08:07.665159 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 15 01:08:07.665164 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 15 01:08:07.665170 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 15 01:08:07.665175 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 15 01:08:07.665181 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 15 01:08:07.665193 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 15 01:08:07.665200 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 15 01:08:07.665206 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 15 01:08:07.665212 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 15 01:08:07.665219 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 15 01:08:07.665225 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 15 01:08:07.665231 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 15 01:08:07.665237 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 15 01:08:07.665243 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 15 01:08:07.665249 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 May 15 01:08:07.665254 kernel: Policy zone: DMA32 May 15 01:08:07.665261 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 01:08:07.665269 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 15 01:08:07.665275 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 15 01:08:07.665281 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 15 01:08:07.665287 kernel: printk: log_buf_len min size: 262144 bytes May 15 01:08:07.665294 kernel: printk: log_buf_len: 1048576 bytes May 15 01:08:07.665300 kernel: printk: early log buf free: 239728(91%) May 15 01:08:07.665305 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 01:08:07.665311 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 15 01:08:07.665318 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 01:08:07.665325 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 155976K reserved, 0K cma-reserved) May 15 01:08:07.665331 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 15 01:08:07.665337 kernel: ftrace: allocating 34584 entries in 136 pages May 15 01:08:07.665343 kernel: ftrace: allocated 136 pages with 2 groups May 15 01:08:07.665350 kernel: rcu: Hierarchical RCU implementation. May 15 01:08:07.665358 kernel: rcu: RCU event tracing is enabled. May 15 01:08:07.665364 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 15 01:08:07.665370 kernel: Rude variant of Tasks RCU enabled. May 15 01:08:07.665376 kernel: Tracing variant of Tasks RCU enabled. May 15 01:08:07.665382 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 01:08:07.665388 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 15 01:08:07.665394 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 15 01:08:07.665400 kernel: random: crng init done May 15 01:08:07.665405 kernel: Console: colour VGA+ 80x25 May 15 01:08:07.665415 kernel: printk: console [tty0] enabled May 15 01:08:07.665425 kernel: printk: console [ttyS0] enabled May 15 01:08:07.665435 kernel: ACPI: Core revision 20210730 May 15 01:08:07.665443 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 15 01:08:07.665449 kernel: APIC: Switch to symmetric I/O mode setup May 15 01:08:07.665455 kernel: x2apic enabled May 15 01:08:07.665461 kernel: Switched APIC routing to physical x2apic. May 15 01:08:07.665467 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 01:08:07.665473 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 15 01:08:07.665479 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) May 15 01:08:07.665487 kernel: Disabled fast string operations May 15 01:08:07.665493 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 15 01:08:07.665499 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 15 01:08:07.665505 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 01:08:07.665511 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
May 15 01:08:07.665517 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 15 01:08:07.665525 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 15 01:08:07.665535 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 15 01:08:07.665546 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 15 01:08:07.665556 kernel: RETBleed: Mitigation: Enhanced IBRS May 15 01:08:07.665566 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 01:08:07.665572 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 15 01:08:07.665579 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 15 01:08:07.665585 kernel: SRBDS: Unknown: Dependent on hypervisor status May 15 01:08:07.665591 kernel: GDS: Unknown: Dependent on hypervisor status May 15 01:08:07.665597 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 01:08:07.665606 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 01:08:07.665616 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 01:08:07.665622 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 01:08:07.665628 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 15 01:08:07.665634 kernel: Freeing SMP alternatives memory: 32K May 15 01:08:07.665640 kernel: pid_max: default: 131072 minimum: 1024 May 15 01:08:07.665646 kernel: LSM: Security Framework initializing May 15 01:08:07.665652 kernel: SELinux: Initializing. May 15 01:08:07.665658 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 15 01:08:07.665664 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 15 01:08:07.665672 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 15 01:08:07.665678 kernel: Performance Events: Skylake events, core PMU driver. May 15 01:08:07.665684 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 15 01:08:07.665690 kernel: core: CPUID marked event: 'instructions' unavailable May 15 01:08:07.665697 kernel: core: CPUID marked event: 'bus cycles' unavailable May 15 01:08:07.665702 kernel: core: CPUID marked event: 'cache references' unavailable May 15 01:08:07.665708 kernel: core: CPUID marked event: 'cache misses' unavailable May 15 01:08:07.665714 kernel: core: CPUID marked event: 'branch instructions' unavailable May 15 01:08:07.665721 kernel: core: CPUID marked event: 'branch misses' unavailable May 15 01:08:07.665726 kernel: ... version: 1 May 15 01:08:07.665739 kernel: ... bit width: 48 May 15 01:08:07.665745 kernel: ... generic registers: 4 May 15 01:08:07.665751 kernel: ... value mask: 0000ffffffffffff May 15 01:08:07.665757 kernel: ... max period: 000000007fffffff May 15 01:08:07.665763 kernel: ... fixed-purpose events: 0 May 15 01:08:07.665769 kernel: ... event mask: 000000000000000f May 15 01:08:07.665775 kernel: signal: max sigframe size: 1776 May 15 01:08:07.665781 kernel: rcu: Hierarchical SRCU implementation. May 15 01:08:07.665788 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 15 01:08:07.665794 kernel: smp: Bringing up secondary CPUs ... May 15 01:08:07.665801 kernel: x86: Booting SMP configuration: May 15 01:08:07.665806 kernel: .... 
node #0, CPUs: #1 May 15 01:08:07.665813 kernel: Disabled fast string operations May 15 01:08:07.665818 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 15 01:08:07.665824 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 15 01:08:07.665830 kernel: smp: Brought up 1 node, 2 CPUs May 15 01:08:07.665836 kernel: smpboot: Max logical packages: 128 May 15 01:08:07.665842 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 15 01:08:07.665849 kernel: devtmpfs: initialized May 15 01:08:07.665855 kernel: x86/mm: Memory block size: 128MB May 15 01:08:07.665861 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 15 01:08:07.665867 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 01:08:07.665873 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 15 01:08:07.665879 kernel: pinctrl core: initialized pinctrl subsystem May 15 01:08:07.665885 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 01:08:07.665890 kernel: audit: initializing netlink subsys (disabled) May 15 01:08:07.665897 kernel: audit: type=2000 audit(1747271286.059:1): state=initialized audit_enabled=0 res=1 May 15 01:08:07.665903 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 01:08:07.665909 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 01:08:07.665916 kernel: cpuidle: using governor menu May 15 01:08:07.665926 kernel: Simple Boot Flag at 0x36 set to 0x80 May 15 01:08:07.665937 kernel: ACPI: bus type PCI registered May 15 01:08:07.665946 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 01:08:07.665953 kernel: dca service started, version 1.12.1 May 15 01:08:07.665959 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 15 01:08:07.665965 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 May 15 01:08:07.665972 kernel: PCI: Using configuration type 1 for base access May 15 01:08:07.665978 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 15 01:08:07.665984 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 15 01:08:07.665992 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 01:08:07.666001 kernel: ACPI: Added _OSI(Module Device) May 15 01:08:07.666011 kernel: ACPI: Added _OSI(Processor Device) May 15 01:08:07.666021 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 01:08:07.666031 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 01:08:07.666038 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 01:08:07.666045 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 01:08:07.666051 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 01:08:07.666057 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 01:08:07.666063 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 15 01:08:07.666071 kernel: ACPI: Interpreter enabled May 15 01:08:07.666081 kernel: ACPI: PM: (supports S0 S1 S5) May 15 01:08:07.666087 kernel: ACPI: Using IOAPIC for interrupt routing May 15 01:08:07.666093 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 01:08:07.666101 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 15 01:08:07.666107 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 15 01:08:07.666203 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 01:08:07.666270 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 15 01:08:07.666318 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 15 01:08:07.666326 kernel: PCI host bridge to bus 0000:00 May 15 01:08:07.666392 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 01:08:07.666440 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 15 01:08:07.666490 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 01:08:07.666556 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 01:08:07.666599 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 15 01:08:07.666640 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 15 01:08:07.666694 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 15 01:08:07.670785 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 15 01:08:07.670860 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 15 01:08:07.670919 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 15 01:08:07.670968 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 15 01:08:07.671017 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 15 01:08:07.671065 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 15 01:08:07.671112 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 15 01:08:07.671161 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 15 01:08:07.671212 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 15 01:08:07.671259 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 15 01:08:07.671306 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 15 01:08:07.671359 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 15 01:08:07.671407 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 15 01:08:07.671456 kernel: pci 0000:00:07.7: reg 0x14: [mem 
0xfebfe000-0xfebfffff 64bit] May 15 01:08:07.671506 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 15 01:08:07.671553 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 15 01:08:07.671600 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 15 01:08:07.671645 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 15 01:08:07.671691 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 15 01:08:07.671748 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 01:08:07.671805 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 15 01:08:07.671857 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.671905 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 15 01:08:07.671956 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672007 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 15 01:08:07.672058 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672108 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 15 01:08:07.672160 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672208 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 15 01:08:07.672258 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672306 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 15 01:08:07.672355 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672402 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 15 01:08:07.672455 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672502 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 15 01:08:07.672553 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672600 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 15 01:08:07.672653 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672711 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 15 01:08:07.672798 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672847 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 15 01:08:07.672897 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.672944 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 15 01:08:07.672993 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.673044 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 15 01:08:07.673095 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.673142 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 15 01:08:07.673193 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.673240 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 15 01:08:07.673289 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.673350 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 15 01:08:07.673400 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.673447 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 15 01:08:07.673496 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.673543 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 15 01:08:07.673592 
kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.673639 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 15 01:08:07.673691 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.675775 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 15 01:08:07.675841 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.675894 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 15 01:08:07.675948 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.675997 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 15 01:08:07.676050 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676097 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 15 01:08:07.676148 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676195 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 15 01:08:07.676244 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676290 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 15 01:08:07.676342 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676401 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 15 01:08:07.676454 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676500 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 15 01:08:07.676549 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676596 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 15 01:08:07.676647 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676694 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 15 01:08:07.676756 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676806 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 15 01:08:07.676855 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676901 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 15 01:08:07.676950 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.676999 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 15 01:08:07.677049 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 15 01:08:07.677096 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 15 01:08:07.677144 kernel: pci_bus 0000:01: extended config space not accessible May 15 01:08:07.677193 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 15 01:08:07.677242 kernel: pci_bus 0000:02: extended config space not accessible May 15 01:08:07.677252 kernel: acpiphp: Slot [32] registered May 15 01:08:07.677259 kernel: acpiphp: Slot [33] registered May 15 01:08:07.677265 kernel: acpiphp: Slot [34] registered May 15 01:08:07.677271 kernel: acpiphp: Slot [35] registered May 15 01:08:07.677277 kernel: acpiphp: Slot [36] registered May 15 01:08:07.677283 kernel: acpiphp: Slot [37] registered May 15 01:08:07.677289 kernel: acpiphp: Slot [38] registered May 15 01:08:07.677295 kernel: acpiphp: Slot [39] registered May 15 01:08:07.677301 kernel: acpiphp: Slot [40] registered May 15 01:08:07.677308 kernel: acpiphp: Slot [41] registered May 15 01:08:07.677314 kernel: acpiphp: Slot [42] registered May 15 01:08:07.677320 kernel: acpiphp: Slot [43] registered May 15 01:08:07.677326 kernel: acpiphp: Slot [44] registered May 15 
01:08:07.677332 kernel: acpiphp: Slot [45] registered May 15 01:08:07.677338 kernel: acpiphp: Slot [46] registered May 15 01:08:07.677344 kernel: acpiphp: Slot [47] registered May 15 01:08:07.677353 kernel: acpiphp: Slot [48] registered May 15 01:08:07.677359 kernel: acpiphp: Slot [49] registered May 15 01:08:07.677365 kernel: acpiphp: Slot [50] registered May 15 01:08:07.677372 kernel: acpiphp: Slot [51] registered May 15 01:08:07.677378 kernel: acpiphp: Slot [52] registered May 15 01:08:07.677384 kernel: acpiphp: Slot [53] registered May 15 01:08:07.677390 kernel: acpiphp: Slot [54] registered May 15 01:08:07.677396 kernel: acpiphp: Slot [55] registered May 15 01:08:07.677402 kernel: acpiphp: Slot [56] registered May 15 01:08:07.677408 kernel: acpiphp: Slot [57] registered May 15 01:08:07.677414 kernel: acpiphp: Slot [58] registered May 15 01:08:07.677420 kernel: acpiphp: Slot [59] registered May 15 01:08:07.677427 kernel: acpiphp: Slot [60] registered May 15 01:08:07.677433 kernel: acpiphp: Slot [61] registered May 15 01:08:07.677440 kernel: acpiphp: Slot [62] registered May 15 01:08:07.677449 kernel: acpiphp: Slot [63] registered May 15 01:08:07.677521 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 15 01:08:07.677577 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 15 01:08:07.677624 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 15 01:08:07.677670 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 15 01:08:07.677716 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 15 01:08:07.677778 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 15 01:08:07.677825 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 15 01:08:07.677872 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 15 01:08:07.677918 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 15 01:08:07.677972 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 15 01:08:07.678021 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 15 01:08:07.678071 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 15 01:08:07.678118 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 15 01:08:07.678165 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 15 01:08:07.678213 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 15 01:08:07.678261 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 15 01:08:07.678307 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 15 01:08:07.678362 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 15 01:08:07.678411 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 15 01:08:07.678460 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 15 01:08:07.678507 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 15 01:08:07.678567 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 15 01:08:07.678618 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 15 01:08:07.678664 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 15 01:08:07.678711 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 15 01:08:07.678769 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 15 01:08:07.678826 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 15 01:08:07.678881 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 15 01:08:07.678930 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 15 01:08:07.678978 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 15 01:08:07.679025 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 15 01:08:07.679072 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 15 01:08:07.679121 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 15 01:08:07.679166 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 15 01:08:07.679212 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 15 01:08:07.679259 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 15 01:08:07.679306 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 15 01:08:07.679352 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 15 01:08:07.679399 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 15 01:08:07.679447 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 15 01:08:07.679494 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 15 01:08:07.679548 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 15 01:08:07.679596 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 15 01:08:07.679645 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 15 01:08:07.679694 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 15 01:08:07.679748 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 15 01:08:07.679801 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 15 01:08:07.679849 kernel: pci 0000:0b:00.0: supports D1 D2 May 15 01:08:07.679898 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 15 01:08:07.679954 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 15 01:08:07.680004 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 15 01:08:07.680051 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 15 01:08:07.680098 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 15 01:08:07.680146 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 15 01:08:07.680195 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 15 01:08:07.680242 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 15 01:08:07.680290 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 15 01:08:07.680337 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 15 01:08:07.680384 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 15 01:08:07.680429 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 15 01:08:07.680476 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 15 01:08:07.680522 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 15 01:08:07.680572 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 15 01:08:07.680617 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 15 01:08:07.680665 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 15 01:08:07.680711 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 15 01:08:07.680766 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 15 01:08:07.680814 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 15 01:08:07.680861 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 15 01:08:07.680908 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 15 01:08:07.680956 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 15 01:08:07.681003 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 15 01:08:07.681049 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 15 01:08:07.681096 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 15 01:08:07.681143 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 15 01:08:07.681189 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 15 01:08:07.681236 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 15 01:08:07.685810 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 15 01:08:07.685878 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 15 01:08:07.685933 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 15 01:08:07.685986 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 15 01:08:07.686038 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 15 01:08:07.686088 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 15 01:08:07.686139 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 15 01:08:07.686193 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 15 01:08:07.686245 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 15 01:08:07.686296 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 15 01:08:07.686345 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 15 01:08:07.686396 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 15 01:08:07.686446 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 15 01:08:07.686496 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 15 01:08:07.686548 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 15 01:08:07.686599 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 15 01:08:07.686651 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 15 01:08:07.686704 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 15 01:08:07.686763 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 15 01:08:07.686814 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 15 01:08:07.686868 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 15 01:08:07.686918 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 15 01:08:07.686968 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 15 01:08:07.687020 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 15 01:08:07.687073 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 15 01:08:07.687123 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 15 01:08:07.687176 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 15 01:08:07.687225 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 15 01:08:07.687276 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 15 01:08:07.687326 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 15 01:08:07.687379 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 15 01:08:07.687428 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 15 01:08:07.687481 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 15 01:08:07.687531 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 15 01:08:07.687583 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 15 01:08:07.687634 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 15 01:08:07.687685 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 15 01:08:07.687747 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 15 01:08:07.687802 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 15 01:08:07.687856 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 15 01:08:07.687908 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 15 01:08:07.687959 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 15 01:08:07.688008 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 15 01:08:07.688061 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 15 01:08:07.688112 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 15 01:08:07.688162 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 15 01:08:07.688213 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 15 01:08:07.688264 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 15 01:08:07.688315 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 15 01:08:07.688372 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 15 01:08:07.688423 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 15 01:08:07.688473 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 15 01:08:07.688481 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 15 01:08:07.688488 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 May 15 01:08:07.688494 kernel: ACPI: PCI: Interrupt link LNKB disabled May 15 01:08:07.688500 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 01:08:07.688508 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 15 01:08:07.688514 kernel: iommu: Default domain type: Translated May 15 01:08:07.688520 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 01:08:07.688572 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 15 01:08:07.688622 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 01:08:07.688672 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 15 01:08:07.688680 kernel: vgaarb: loaded May 15 01:08:07.688686 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 01:08:07.688692 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 01:08:07.688700 kernel: PTP clock support registered May 15 01:08:07.688706 kernel: PCI: Using ACPI for IRQ routing May 15 01:08:07.688712 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 01:08:07.688718 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 15 01:08:07.688724 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 15 01:08:07.688730 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 15 01:08:07.688748 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 15 01:08:07.688754 kernel: clocksource: Switched to clocksource tsc-early May 15 01:08:07.688760 kernel: VFS: Disk quotas dquot_6.6.0 May 15 01:08:07.688767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 01:08:07.688773 kernel: pnp: PnP ACPI init May 15 01:08:07.688831 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 15 01:08:07.688879 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 15 01:08:07.688925 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 15 01:08:07.688977 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 15 01:08:07.689027 kernel: pnp 00:06: [dma 2] May 15 01:08:07.689079 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 15 01:08:07.689127 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 15 01:08:07.689172 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 15 01:08:07.689180 kernel: pnp: PnP ACPI: found 8 devices May 15 01:08:07.689186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 01:08:07.689193 kernel: NET: Registered PF_INET protocol family May 15 01:08:07.689199 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 01:08:07.689207 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 15 01:08:07.689213 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 01:08:07.689219 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 15 01:08:07.689225 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 15 01:08:07.689231 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 15 01:08:07.689237 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 15 01:08:07.689243 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 15 01:08:07.689249 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 
01:08:07.689255 kernel: NET: Registered PF_XDP protocol family May 15 01:08:07.689310 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 01:08:07.689362 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 15 01:08:07.689414 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 15 01:08:07.689466 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 15 01:08:07.689518 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 15 01:08:07.689571 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 15 01:08:07.689644 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 15 01:08:07.689721 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 15 01:08:07.689814 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 15 01:08:07.689869 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 15 01:08:07.689921 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 15 01:08:07.689974 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 15 01:08:07.690028 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 15 01:08:07.690080 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 15 01:08:07.690131 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 15 01:08:07.690183 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 15 01:08:07.690234 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 15 01:08:07.690285 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 15 01:08:07.690339 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 15 01:08:07.690391 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 15 01:08:07.690442 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 15 01:08:07.690493 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 15 01:08:07.690545 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 15 01:08:07.690598 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 15 01:08:07.690650 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 15 01:08:07.690700 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 15 01:08:07.690807 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.690862 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 15 01:08:07.690913 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.690965 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691015 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.691069 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691119 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 15 
01:08:07.691169 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691221 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.691272 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691322 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.691377 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691427 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.691480 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691531 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.691583 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691634 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.691685 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691741 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.691794 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 15 01:08:07.691918 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.691980 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 15 01:08:07.692031 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.692082 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 15 01:08:07.692133 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.692184 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 15 01:08:07.692235 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.692286 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 15 01:08:07.692337 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.692387 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 15 01:08:07.692440 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.692489 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 15 01:08:07.692540 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.692590 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 15 01:08:07.692641 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.692691 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 15 01:08:07.698475 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.698540 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 15 01:08:07.698597 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.698646 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 15 01:08:07.698694 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.698759 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 15 01:08:07.698809 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.698855 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 15 01:08:07.698901 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.698949 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 15 01:08:07.698998 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] 
May 15 01:08:07.699045 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 15 01:08:07.699092 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.699138 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 15 01:08:07.699184 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.699231 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 15 01:08:07.699277 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.699325 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 15 01:08:07.699371 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.699418 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 15 01:08:07.699467 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.699514 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 15 01:08:07.699561 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.699607 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 15 01:08:07.699652 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.699699 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 15 01:08:07.704213 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.704274 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 15 01:08:07.704324 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.704387 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 15 01:08:07.704436 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.704483 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 15 01:08:07.704529 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.704577 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 15 01:08:07.704623 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.704670 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 15 01:08:07.704716 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.704777 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 15 01:08:07.704831 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.704935 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 15 01:08:07.704990 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.705043 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 15 01:08:07.705093 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.705145 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 15 01:08:07.705195 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.705247 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 15 01:08:07.705296 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 15 01:08:07.705349 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 15 01:08:07.705404 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 15 01:08:07.705454 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 15 01:08:07.705504 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 15 01:08:07.705553 kernel: 
pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 15 01:08:07.705609 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 15 01:08:07.705661 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 15 01:08:07.705712 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 15 01:08:07.707836 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 15 01:08:07.707895 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 15 01:08:07.708773 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 15 01:08:07.708831 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 15 01:08:07.708881 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 15 01:08:07.708930 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 15 01:08:07.708979 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 15 01:08:07.709026 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 15 01:08:07.709071 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 15 01:08:07.709118 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 15 01:08:07.709164 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 15 01:08:07.709214 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 15 01:08:07.709260 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 15 01:08:07.709307 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 15 01:08:07.709354 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 15 01:08:07.709400 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 15 01:08:07.709449 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 15 01:08:07.709497 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 15 01:08:07.709543 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 15 01:08:07.709589 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 15 01:08:07.709635 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 15 01:08:07.709680 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 15 01:08:07.709727 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 15 01:08:07.710797 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 15 01:08:07.710846 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 15 01:08:07.710896 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 15 01:08:07.710947 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 15 01:08:07.710993 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 15 01:08:07.711039 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 15 01:08:07.711084 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 15 01:08:07.711132 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 15 01:08:07.711178 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 15 01:08:07.711223 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 15 01:08:07.711268 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 15 01:08:07.711315 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 15 01:08:07.711366 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 15 01:08:07.711412 kernel: pci 0000:00:16.2: bridge window [mem 
0xfcc00000-0xfccfffff] May 15 01:08:07.711458 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 15 01:08:07.711503 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 15 01:08:07.711549 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 15 01:08:07.711594 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 15 01:08:07.711640 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 15 01:08:07.711685 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 15 01:08:07.714073 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 15 01:08:07.714142 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 15 01:08:07.714817 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 15 01:08:07.714868 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 15 01:08:07.714918 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 15 01:08:07.714965 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 15 01:08:07.715011 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 15 01:08:07.715059 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 15 01:08:07.715106 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 15 01:08:07.715151 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 15 01:08:07.715199 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 15 01:08:07.715248 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 15 01:08:07.715294 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 15 01:08:07.715340 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 15 01:08:07.715386 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 15 01:08:07.715432 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 15 01:08:07.715477 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 15 01:08:07.715522 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 15 01:08:07.715569 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 15 01:08:07.715615 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 15 01:08:07.715661 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 15 01:08:07.715710 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 15 01:08:07.717203 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 15 01:08:07.717260 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 15 01:08:07.717311 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 15 01:08:07.717646 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 15 01:08:07.717701 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 15 01:08:07.717953 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 15 01:08:07.718008 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 15 01:08:07.718057 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 15 01:08:07.718108 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 15 01:08:07.718156 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 15 01:08:07.718201 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 15 01:08:07.718247 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 
15 01:08:07.718294 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 15 01:08:07.719786 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 15 01:08:07.720132 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 15 01:08:07.720191 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 15 01:08:07.720242 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 15 01:08:07.720306 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 15 01:08:07.720358 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 15 01:08:07.720407 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 15 01:08:07.720454 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 15 01:08:07.720500 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 15 01:08:07.720546 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 15 01:08:07.720593 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 15 01:08:07.720640 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 15 01:08:07.720686 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 15 01:08:07.721752 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 15 01:08:07.721818 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 15 01:08:07.721868 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 15 01:08:07.721918 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 15 01:08:07.721966 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 15 01:08:07.722012 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 15 01:08:07.722058 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 15 01:08:07.722104 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 15 01:08:07.722151 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 15 01:08:07.722198 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 15 01:08:07.722244 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 15 01:08:07.722292 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 15 01:08:07.722339 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 15 01:08:07.722384 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 15 01:08:07.722431 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 15 01:08:07.722479 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 15 01:08:07.722520 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 15 01:08:07.722562 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 15 01:08:07.722603 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 15 01:08:07.722645 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 15 01:08:07.722690 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 15 01:08:07.723249 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 15 01:08:07.723307 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 15 01:08:07.723353 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 15 01:08:07.723397 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 15 01:08:07.724040 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 15 
01:08:07.724093 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 15 01:08:07.724138 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 15 01:08:07.724764 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 15 01:08:07.724819 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 15 01:08:07.724864 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 15 01:08:07.724918 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 15 01:08:07.724963 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 15 01:08:07.725009 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 15 01:08:07.725056 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 15 01:08:07.725099 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 15 01:08:07.725154 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 15 01:08:07.725202 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 15 01:08:07.725246 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 15 01:08:07.725293 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 15 01:08:07.725339 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 15 01:08:07.725385 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 15 01:08:07.725428 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 15 01:08:07.725475 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 15 01:08:07.725518 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 15 01:08:07.725567 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 15 01:08:07.725613 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 15 01:08:07.725659 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 15 01:08:07.725702 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 15 01:08:07.727723 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 15 01:08:07.727793 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 15 01:08:07.727840 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 15 01:08:07.727889 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 15 01:08:07.727945 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 15 01:08:07.727992 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 15 01:08:07.728035 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 15 01:08:07.728084 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 15 01:08:07.728129 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 15 01:08:07.728176 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 15 01:08:07.728223 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 15 01:08:07.728273 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 15 01:08:07.728317 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 15 01:08:07.728370 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 15 01:08:07.728415 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 15 01:08:07.728462 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 15 01:08:07.728511 kernel: pci_bus 0000:12: resource 2 
[mem 0xe5f00000-0xe5ffffff 64bit pref] May 15 01:08:07.728558 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 15 01:08:07.728603 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 15 01:08:07.728646 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 15 01:08:07.728694 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 15 01:08:07.728751 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 15 01:08:07.728800 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 15 01:08:07.728847 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 15 01:08:07.728892 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 15 01:08:07.728936 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 15 01:08:07.728986 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 15 01:08:07.729031 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 15 01:08:07.729078 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 15 01:08:07.729127 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 15 01:08:07.729206 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 15 01:08:07.729277 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 15 01:08:07.729350 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 15 01:08:07.729406 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 15 01:08:07.729454 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 15 01:08:07.729501 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 15 01:08:07.729551 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 15 01:08:07.729595 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 15 01:08:07.729639 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 15 01:08:07.729686 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 15 01:08:07.729730 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 15 01:08:07.729807 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 15 01:08:07.729860 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 15 01:08:07.729903 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 15 01:08:07.729953 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 15 01:08:07.729996 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 15 01:08:07.730043 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 15 01:08:07.730089 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 15 01:08:07.730136 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 15 01:08:07.730180 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 15 01:08:07.730227 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 15 01:08:07.730270 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 15 01:08:07.730317 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 15 01:08:07.730360 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 15 01:08:07.730416 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 15 01:08:07.730425 kernel: PCI: CLS 32 bytes, default 64 May 15 
01:08:07.730433 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 15 01:08:07.730439 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 15 01:08:07.730446 kernel: clocksource: Switched to clocksource tsc May 15 01:08:07.730453 kernel: Initialise system trusted keyrings May 15 01:08:07.730459 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 15 01:08:07.730466 kernel: Key type asymmetric registered May 15 01:08:07.730474 kernel: Asymmetric key parser 'x509' registered May 15 01:08:07.730480 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 01:08:07.730487 kernel: io scheduler mq-deadline registered May 15 01:08:07.730494 kernel: io scheduler kyber registered May 15 01:08:07.730500 kernel: io scheduler bfq registered May 15 01:08:07.730551 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 15 01:08:07.730600 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.730648 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 15 01:08:07.730698 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.730754 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 15 01:08:07.730802 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.730851 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 15 01:08:07.730898 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.730947 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 15 01:08:07.730994 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731046 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 15 01:08:07.731093 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731141 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 15 01:08:07.731188 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731236 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 15 01:08:07.731285 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731333 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 15 01:08:07.731385 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731432 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 15 01:08:07.731479 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731527 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 15 01:08:07.731574 kernel: pcieport 0000:00:16.2: pciehp: 
Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731625 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 15 01:08:07.731673 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731721 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 15 01:08:07.731776 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731824 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 15 01:08:07.731875 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.731923 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 15 01:08:07.731970 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.732020 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 15 01:08:07.732067 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.732117 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 15 01:08:07.732165 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.732212 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 15 01:08:07.732259 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.732307 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 15 01:08:07.732354 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.732401 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 15 01:08:07.732462 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.732511 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 15 01:08:07.732557 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.732606 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 15 01:08:07.732653 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.733137 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 15 01:08:07.733198 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.733250 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 15 01:08:07.733298 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.733347 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 15 01:08:07.733394 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.733445 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 15 01:08:07.733492 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.733541 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 15 01:08:07.733893 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.733948 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 15 01:08:07.733999 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.734052 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 15 01:08:07.734101 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.734151 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 15 01:08:07.734200 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.734249 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 15 01:08:07.734298 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.734354 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 15 01:08:07.734402 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 15 01:08:07.734411 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 01:08:07.734418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 01:08:07.734424 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 01:08:07.734431 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 15 01:08:07.734439 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 01:08:07.734445 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 01:08:07.734498 kernel: rtc_cmos 00:01: registered as rtc0 May 15 01:08:07.734544 kernel: rtc_cmos 00:01: setting system clock to 2025-05-15T01:08:07 UTC (1747271287) May 15 01:08:07.734587 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 15 01:08:07.734596 kernel: intel_pstate: CPU model not supported May 15 01:08:07.734603 kernel: NET: Registered PF_INET6 protocol family May 15 01:08:07.734609 kernel: Segment Routing with IPv6 May 15 01:08:07.734615 kernel: In-situ OAM (IOAM) with IPv6 May 15 01:08:07.734623 kernel: NET: Registered PF_PACKET protocol family May 15 01:08:07.734630 kernel: Key type dns_resolver registered May 15 01:08:07.734636 kernel: IPI shorthand broadcast: enabled May 15 01:08:07.734651 kernel: sched_clock: Marking stable (844016843, 224809926)->(1136676985, -67850216) May 15 01:08:07.734658 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 01:08:07.734665 kernel: registered taskstats version 1 May 15 01:08:07.734671 kernel: Loading compiled-in X.509 certificates May 15 01:08:07.734678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key 
for 5.15.181-flatcar: a3400373b5c34ccb74f940604f224840f2b40bdd' May 15 01:08:07.734684 kernel: Key type .fscrypt registered May 15 01:08:07.734692 kernel: Key type fscrypt-provisioning registered May 15 01:08:07.734698 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 01:08:07.734705 kernel: ima: Allocated hash algorithm: sha1 May 15 01:08:07.734711 kernel: ima: No architecture policies found May 15 01:08:07.734717 kernel: clk: Disabling unused clocks May 15 01:08:07.734724 kernel: Freeing unused kernel image (initmem) memory: 47456K May 15 01:08:07.734730 kernel: Write protecting the kernel read-only data: 28672k May 15 01:08:07.734814 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 15 01:08:07.734823 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 15 01:08:07.734830 kernel: Run /init as init process May 15 01:08:07.734836 kernel: with arguments: May 15 01:08:07.735128 kernel: /init May 15 01:08:07.735136 kernel: with environment: May 15 01:08:07.735144 kernel: HOME=/ May 15 01:08:07.735150 kernel: TERM=linux May 15 01:08:07.735156 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 01:08:07.735164 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 01:08:07.735175 systemd[1]: Detected virtualization vmware. May 15 01:08:07.735182 systemd[1]: Detected architecture x86-64. May 15 01:08:07.735191 systemd[1]: Running in initrd. May 15 01:08:07.735201 systemd[1]: No hostname configured, using default hostname. May 15 01:08:07.735221 systemd[1]: Hostname set to . May 15 01:08:07.735231 systemd[1]: Initializing machine ID from random generator. May 15 01:08:07.735239 systemd[1]: Queued start job for default target initrd.target. May 15 01:08:07.735246 systemd[1]: Started systemd-ask-password-console.path. May 15 01:08:07.735254 systemd[1]: Reached target cryptsetup.target. May 15 01:08:07.735260 systemd[1]: Reached target paths.target. May 15 01:08:07.735267 systemd[1]: Reached target slices.target. May 15 01:08:07.735273 systemd[1]: Reached target swap.target. May 15 01:08:07.735279 systemd[1]: Reached target timers.target. May 15 01:08:07.735286 systemd[1]: Listening on iscsid.socket. May 15 01:08:07.735293 systemd[1]: Listening on iscsiuio.socket. May 15 01:08:07.735301 systemd[1]: Listening on systemd-journald-audit.socket. May 15 01:08:07.735307 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 01:08:07.735313 systemd[1]: Listening on systemd-journald.socket. May 15 01:08:07.735320 systemd[1]: Listening on systemd-networkd.socket. May 15 01:08:07.735326 systemd[1]: Listening on systemd-udevd-control.socket. May 15 01:08:07.735517 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 01:08:07.735528 systemd[1]: Reached target sockets.target. May 15 01:08:07.735535 systemd[1]: Starting kmod-static-nodes.service... May 15 01:08:07.735542 systemd[1]: Finished network-cleanup.service. May 15 01:08:07.735551 systemd[1]: Starting systemd-fsck-usr.service... May 15 01:08:07.735558 systemd[1]: Starting systemd-journald.service... May 15 01:08:07.735564 systemd[1]: Starting systemd-modules-load.service... May 15 01:08:07.735571 systemd[1]: Starting systemd-resolved.service... 
May 15 01:08:07.735577 systemd[1]: Starting systemd-vconsole-setup.service... May 15 01:08:07.735584 systemd[1]: Finished kmod-static-nodes.service. May 15 01:08:07.735590 systemd[1]: Finished systemd-fsck-usr.service. May 15 01:08:07.735597 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 01:08:07.735603 systemd[1]: Finished systemd-vconsole-setup.service. May 15 01:08:07.735611 kernel: audit: type=1130 audit(1747271287.658:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.735618 systemd[1]: Starting dracut-cmdline-ask.service... May 15 01:08:07.735625 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 01:08:07.735631 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 01:08:07.735638 kernel: audit: type=1130 audit(1747271287.693:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.735645 systemd[1]: Finished dracut-cmdline-ask.service. May 15 01:08:07.735651 kernel: Bridge firewalling registered May 15 01:08:07.735658 kernel: audit: type=1130 audit(1747271287.700:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.735665 systemd[1]: Starting dracut-cmdline.service... May 15 01:08:07.735672 kernel: SCSI subsystem initialized May 15 01:08:07.735679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 01:08:07.735685 systemd[1]: Started systemd-resolved.service. May 15 01:08:07.735692 systemd[1]: Reached target nss-lookup.target. May 15 01:08:07.735979 kernel: audit: type=1130 audit(1747271287.727:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.735989 kernel: device-mapper: uevent: version 1.0.3 May 15 01:08:07.735996 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 01:08:07.736007 systemd-journald[217]: Journal started May 15 01:08:07.736043 systemd-journald[217]: Runtime Journal (/run/log/journal/c771d5b077bc47c98fa3e88a87a8a584) is 4.8M, max 38.8M, 34.0M free. May 15 01:08:07.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:08:07.658519 systemd-modules-load[218]: Inserted module 'overlay' May 15 01:08:07.741773 systemd[1]: Started systemd-journald.service. May 15 01:08:07.741789 kernel: audit: type=1130 audit(1747271287.735:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.701516 systemd-modules-load[218]: Inserted module 'br_netfilter' May 15 01:08:07.724527 systemd-resolved[219]: Positive Trust Anchors: May 15 01:08:07.724532 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 01:08:07.724551 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 01:08:07.750863 kernel: audit: type=1130 audit(1747271287.741:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.727238 systemd-resolved[219]: Defaulting to hostname 'linux'. May 15 01:08:07.742456 systemd-modules-load[218]: Inserted module 'dm_multipath' May 15 01:08:07.742825 systemd[1]: Finished systemd-modules-load.service. May 15 01:08:07.751694 dracut-cmdline[233]: dracut-dracut-053 May 15 01:08:07.751694 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 15 01:08:07.751694 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 01:08:07.744087 systemd[1]: Starting systemd-sysctl.service... May 15 01:08:07.752878 systemd[1]: Finished systemd-sysctl.service. May 15 01:08:07.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.755893 kernel: audit: type=1130 audit(1747271287.751:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.773750 kernel: Loading iSCSI transport class v2.0-870. 
May 15 01:08:07.785745 kernel: iscsi: registered transport (tcp) May 15 01:08:07.800749 kernel: iscsi: registered transport (qla4xxx) May 15 01:08:07.800781 kernel: QLogic iSCSI HBA Driver May 15 01:08:07.817207 systemd[1]: Finished dracut-cmdline.service. May 15 01:08:07.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.817931 systemd[1]: Starting dracut-pre-udev.service... May 15 01:08:07.820767 kernel: audit: type=1130 audit(1747271287.815:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:07.858769 kernel: raid6: avx2x4 gen() 41104 MB/s May 15 01:08:07.875761 kernel: raid6: avx2x4 xor() 19865 MB/s May 15 01:08:07.892750 kernel: raid6: avx2x2 gen() 52895 MB/s May 15 01:08:07.909750 kernel: raid6: avx2x2 xor() 31602 MB/s May 15 01:08:07.926759 kernel: raid6: avx2x1 gen() 44556 MB/s May 15 01:08:07.943753 kernel: raid6: avx2x1 xor() 26885 MB/s May 15 01:08:07.960749 kernel: raid6: sse2x4 gen() 21045 MB/s May 15 01:08:07.977749 kernel: raid6: sse2x4 xor() 11948 MB/s May 15 01:08:07.994761 kernel: raid6: sse2x2 gen() 20613 MB/s May 15 01:08:08.011830 kernel: raid6: sse2x2 xor() 11790 MB/s May 15 01:08:08.028760 kernel: raid6: sse2x1 gen() 16018 MB/s May 15 01:08:08.045939 kernel: raid6: sse2x1 xor() 8859 MB/s May 15 01:08:08.045985 kernel: raid6: using algorithm avx2x2 gen() 52895 MB/s May 15 01:08:08.045995 kernel: raid6: .... xor() 31602 MB/s, rmw enabled May 15 01:08:08.047121 kernel: raid6: using avx2x2 recovery algorithm May 15 01:08:08.055747 kernel: xor: automatically using best checksumming function avx May 15 01:08:08.114829 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 15 01:08:08.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:08.119428 systemd[1]: Finished dracut-pre-udev.service. May 15 01:08:08.120037 systemd[1]: Starting systemd-udevd.service... May 15 01:08:08.122783 kernel: audit: type=1130 audit(1747271288.117:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:08.118000 audit: BPF prog-id=7 op=LOAD May 15 01:08:08.118000 audit: BPF prog-id=8 op=LOAD May 15 01:08:08.130255 systemd-udevd[416]: Using default interface naming scheme 'v252'. May 15 01:08:08.132928 systemd[1]: Started systemd-udevd.service. May 15 01:08:08.133389 systemd[1]: Starting dracut-pre-trigger.service... May 15 01:08:08.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:08.141009 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation May 15 01:08:08.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:08.156653 systemd[1]: Finished dracut-pre-trigger.service. May 15 01:08:08.157177 systemd[1]: Starting systemd-udev-trigger.service... 
May 15 01:08:08.235708 systemd[1]: Finished systemd-udev-trigger.service. May 15 01:08:08.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:08.290054 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 15 01:08:08.290087 kernel: vmw_pvscsi: using 64bit dma May 15 01:08:08.295742 kernel: vmw_pvscsi: max_id: 16 May 15 01:08:08.295768 kernel: vmw_pvscsi: setting ring_pages to 8 May 15 01:08:08.307744 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI May 15 01:08:08.312293 kernel: vmw_pvscsi: enabling reqCallThreshold May 15 01:08:08.312310 kernel: vmw_pvscsi: driver-based request coalescing enabled May 15 01:08:08.312326 kernel: vmw_pvscsi: using MSI-X May 15 01:08:08.313589 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 15 01:08:08.314741 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 15 01:08:08.314827 kernel: libata version 3.00 loaded. May 15 01:08:08.314836 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 15 01:08:08.318375 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 15 01:08:08.321766 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 15 01:08:08.321844 kernel: cryptd: max_cpu_qlen set to 1000 May 15 01:08:08.325747 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 15 01:08:08.330764 kernel: AVX2 version of gcm_enc/dec engaged. May 15 01:08:08.330782 kernel: AES CTR mode by8 optimization enabled May 15 01:08:08.336747 kernel: ata_piix 0000:00:07.1: version 2.13 May 15 01:08:08.347604 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 15 01:08:08.368074 kernel: sd 0:0:0:0: [sda] Write Protect is off May 15 01:08:08.368179 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 15 01:08:08.368254 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 15 01:08:08.368319 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 15 01:08:08.368389 kernel: scsi host1: ata_piix May 15 01:08:08.368453 kernel: scsi host2: ata_piix May 15 01:08:08.368509 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 15 01:08:08.368517 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 15 01:08:08.368525 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 01:08:08.368532 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 15 01:08:08.517753 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 15 01:08:08.521800 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 15 01:08:08.552948 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 15 01:08:08.575510 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 01:08:08.575523 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 01:08:08.589750 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (472) May 15 01:08:08.590979 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 15 01:08:08.595679 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 15 01:08:08.600569 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 15 01:08:08.600775 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
May 15 01:08:08.603995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 01:08:08.604856 systemd[1]: Starting disk-uuid.service... May 15 01:08:08.697752 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 01:08:08.732758 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 01:08:09.746660 disk-uuid[550]: The operation has completed successfully. May 15 01:08:09.747022 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 01:08:09.895872 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 01:08:09.895951 systemd[1]: Finished disk-uuid.service. May 15 01:08:09.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:09.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:09.896693 systemd[1]: Starting verity-setup.service... May 15 01:08:09.934761 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 15 01:08:09.983353 systemd[1]: Found device dev-mapper-usr.device. May 15 01:08:09.984788 systemd[1]: Mounting sysusr-usr.mount... May 15 01:08:09.985077 systemd[1]: Finished verity-setup.service. May 15 01:08:09.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.222332 systemd[1]: Mounted sysusr-usr.mount. May 15 01:08:10.222748 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 15 01:08:10.222926 systemd[1]: Starting afterburn-network-kargs.service... May 15 01:08:10.223386 systemd[1]: Starting ignition-setup.service... May 15 01:08:10.358675 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 01:08:10.358720 kernel: BTRFS info (device sda6): using free space tree May 15 01:08:10.358743 kernel: BTRFS info (device sda6): has skinny extents May 15 01:08:10.368272 kernel: BTRFS info (device sda6): enabling ssd optimizations May 15 01:08:10.377254 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 01:08:10.386795 systemd[1]: Finished ignition-setup.service. May 15 01:08:10.387651 systemd[1]: Starting ignition-fetch-offline.service... May 15 01:08:10.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.560126 systemd[1]: Finished afterburn-network-kargs.service. May 15 01:08:10.560877 systemd[1]: Starting parse-ip-for-networkd.service... May 15 01:08:10.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.624567 systemd[1]: Finished parse-ip-for-networkd.service. May 15 01:08:10.623000 audit: BPF prog-id=9 op=LOAD May 15 01:08:10.625746 systemd[1]: Starting systemd-networkd.service... 
May 15 01:08:10.645516 systemd-networkd[735]: lo: Link UP May 15 01:08:10.645522 systemd-networkd[735]: lo: Gained carrier May 15 01:08:10.645820 systemd-networkd[735]: Enumeration completed May 15 01:08:10.646022 systemd-networkd[735]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 15 01:08:10.649754 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 15 01:08:10.649882 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 15 01:08:10.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.646436 systemd[1]: Started systemd-networkd.service. May 15 01:08:10.646629 systemd[1]: Reached target network.target. May 15 01:08:10.647339 systemd[1]: Starting iscsiuio.service... May 15 01:08:10.648668 systemd-networkd[735]: ens192: Link UP May 15 01:08:10.648670 systemd-networkd[735]: ens192: Gained carrier May 15 01:08:10.653676 systemd[1]: Started iscsiuio.service. May 15 01:08:10.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.654891 systemd[1]: Starting iscsid.service... May 15 01:08:10.657111 iscsid[740]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 01:08:10.657111 iscsid[740]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 15 01:08:10.657111 iscsid[740]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 01:08:10.657111 iscsid[740]: If using hardware iscsi like qla4xxx this message can be ignored. May 15 01:08:10.657939 iscsid[740]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 01:08:10.657939 iscsid[740]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 01:08:10.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.658231 systemd[1]: Started iscsid.service. May 15 01:08:10.658803 systemd[1]: Starting dracut-initqueue.service... May 15 01:08:10.666341 systemd[1]: Finished dracut-initqueue.service. May 15 01:08:10.666486 systemd[1]: Reached target remote-fs-pre.target. May 15 01:08:10.666581 systemd[1]: Reached target remote-cryptsetup.target. May 15 01:08:10.666692 systemd[1]: Reached target remote-fs.target. May 15 01:08:10.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.667212 systemd[1]: Starting dracut-pre-mount.service... May 15 01:08:10.672869 systemd[1]: Finished dracut-pre-mount.service. May 15 01:08:10.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
terminal=? res=success' May 15 01:08:10.818038 ignition[607]: Ignition 2.14.0 May 15 01:08:10.818366 ignition[607]: Stage: fetch-offline May 15 01:08:10.818555 ignition[607]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 01:08:10.818750 ignition[607]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 01:08:10.824117 ignition[607]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 01:08:10.824455 ignition[607]: parsed url from cmdline: "" May 15 01:08:10.824504 ignition[607]: no config URL provided May 15 01:08:10.824642 ignition[607]: reading system config file "/usr/lib/ignition/user.ign" May 15 01:08:10.824807 ignition[607]: no config at "/usr/lib/ignition/user.ign" May 15 01:08:10.825344 ignition[607]: config successfully fetched May 15 01:08:10.825405 ignition[607]: parsing config with SHA512: 6eddf2fac60037d5bf2fb6ba8de2e456e2c9682d3fe032285743fe6bf126b009a7d7028cb6dd595555142b286147f44b46324262e542175336783277e1aba466 May 15 01:08:10.893115 unknown[607]: fetched base config from "system" May 15 01:08:10.893125 unknown[607]: fetched user config from "vmware" May 15 01:08:10.893639 ignition[607]: fetch-offline: fetch-offline passed May 15 01:08:10.893717 ignition[607]: Ignition finished successfully May 15 01:08:10.894398 systemd[1]: Finished ignition-fetch-offline.service. May 15 01:08:10.894586 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 01:08:10.895170 systemd[1]: Starting ignition-kargs.service... May 15 01:08:10.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.902901 ignition[755]: Ignition 2.14.0 May 15 01:08:10.902910 ignition[755]: Stage: kargs May 15 01:08:10.902988 ignition[755]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 01:08:10.903000 ignition[755]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 01:08:10.904630 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 01:08:10.916280 ignition[755]: kargs: kargs passed May 15 01:08:10.916319 ignition[755]: Ignition finished successfully May 15 01:08:10.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.917596 systemd[1]: Finished ignition-kargs.service. May 15 01:08:10.918328 systemd[1]: Starting ignition-disks.service... 
May 15 01:08:10.924115 ignition[761]: Ignition 2.14.0 May 15 01:08:10.924124 ignition[761]: Stage: disks May 15 01:08:10.924204 ignition[761]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 01:08:10.924217 ignition[761]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 01:08:10.925880 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 01:08:10.927722 ignition[761]: disks: disks passed May 15 01:08:10.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:10.928326 systemd[1]: Finished ignition-disks.service. May 15 01:08:10.927764 ignition[761]: Ignition finished successfully May 15 01:08:10.928521 systemd[1]: Reached target initrd-root-device.target. May 15 01:08:10.928637 systemd[1]: Reached target local-fs-pre.target. May 15 01:08:10.928752 systemd[1]: Reached target local-fs.target. May 15 01:08:10.928853 systemd[1]: Reached target sysinit.target. May 15 01:08:10.928952 systemd[1]: Reached target basic.target. May 15 01:08:10.929975 systemd[1]: Starting systemd-fsck-root.service... May 15 01:08:11.011073 systemd-fsck[769]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks May 15 01:08:11.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:11.012744 systemd[1]: Finished systemd-fsck-root.service. May 15 01:08:11.013621 systemd[1]: Mounting sysroot.mount... May 15 01:08:11.022765 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 01:08:11.022606 systemd[1]: Mounted sysroot.mount. May 15 01:08:11.023034 systemd[1]: Reached target initrd-root-fs.target. May 15 01:08:11.024413 systemd[1]: Mounting sysroot-usr.mount... May 15 01:08:11.025283 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 01:08:11.025542 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 01:08:11.025825 systemd[1]: Reached target ignition-diskful.target. May 15 01:08:11.027173 systemd[1]: Mounted sysroot-usr.mount. May 15 01:08:11.028027 systemd[1]: Starting initrd-setup-root.service... May 15 01:08:11.031753 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory May 15 01:08:11.036388 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory May 15 01:08:11.039064 initrd-setup-root[795]: cut: /sysroot/etc/shadow: No such file or directory May 15 01:08:11.042053 initrd-setup-root[803]: cut: /sysroot/etc/gshadow: No such file or directory May 15 01:08:11.171286 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
May 15 01:08:11.421192 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (811) May 15 01:08:11.421241 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 01:08:11.421254 kernel: BTRFS info (device sda6): using free space tree May 15 01:08:11.422983 kernel: BTRFS info (device sda6): has skinny extents May 15 01:08:11.426753 kernel: BTRFS info (device sda6): enabling ssd optimizations May 15 01:08:11.429478 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 15 01:08:11.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:11.435746 systemd[1]: Finished initrd-setup-root.service. May 15 01:08:11.436432 systemd[1]: Starting ignition-mount.service... May 15 01:08:11.438851 systemd[1]: Starting sysroot-boot.service... May 15 01:08:11.443842 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 15 01:08:11.444137 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 15 01:08:11.451381 ignition[840]: INFO : Ignition 2.14.0 May 15 01:08:11.452030 ignition[840]: INFO : Stage: mount May 15 01:08:11.452272 ignition[840]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 01:08:11.452461 ignition[840]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 01:08:11.454572 ignition[840]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 01:08:11.456615 systemd[1]: Finished sysroot-boot.service. May 15 01:08:11.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:11.460280 ignition[840]: INFO : mount: mount passed May 15 01:08:11.460483 ignition[840]: INFO : Ignition finished successfully May 15 01:08:11.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:11.461167 systemd[1]: Finished ignition-mount.service. May 15 01:08:11.461817 systemd[1]: Starting ignition-files.service... May 15 01:08:11.467307 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 01:08:11.605764 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (850) May 15 01:08:11.621841 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 01:08:11.621906 kernel: BTRFS info (device sda6): using free space tree May 15 01:08:11.621919 kernel: BTRFS info (device sda6): has skinny extents May 15 01:08:11.628764 kernel: BTRFS info (device sda6): enabling ssd optimizations May 15 01:08:11.631523 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 15 01:08:11.640800 ignition[869]: INFO : Ignition 2.14.0 May 15 01:08:11.640800 ignition[869]: INFO : Stage: files May 15 01:08:11.641335 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 01:08:11.641335 ignition[869]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 01:08:11.643100 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 01:08:11.648400 ignition[869]: DEBUG : files: compiled without relabeling support, skipping May 15 01:08:11.648869 ignition[869]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 01:08:11.648869 ignition[869]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 01:08:11.651090 ignition[869]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 01:08:11.651369 ignition[869]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 01:08:11.652185 unknown[869]: wrote ssh authorized keys file for user: core May 15 01:08:11.652815 ignition[869]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 01:08:11.653263 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 15 01:08:11.653636 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 15 01:08:11.653636 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 01:08:11.653636 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 01:08:11.710949 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 01:08:11.834942 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 01:08:11.835265 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 01:08:11.835265 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 01:08:11.835265 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 01:08:11.835908 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 01:08:11.835908 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 01:08:11.835908 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 01:08:11.835908 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 01:08:11.841129 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 01:08:11.841400 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file 
"/sysroot/etc/flatcar/update.conf" May 15 01:08:11.841400 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 01:08:11.842821 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 01:08:11.842821 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 01:08:11.842821 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" May 15 01:08:11.842821 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition May 15 01:08:11.848302 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1784574571" May 15 01:08:11.848597 ignition[869]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1784574571": device or resource busy May 15 01:08:11.848869 ignition[869]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1784574571", trying btrfs: device or resource busy May 15 01:08:11.849096 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1784574571" May 15 01:08:11.849385 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1784574571" May 15 01:08:11.859901 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1784574571" May 15 01:08:11.860265 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1784574571" May 15 01:08:11.860487 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" May 15 01:08:11.860710 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 01:08:11.861015 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 15 01:08:12.374727 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK May 15 01:08:12.486893 systemd-networkd[735]: ens192: Gained IPv6LL May 15 01:08:12.667801 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 01:08:12.675282 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 15 01:08:12.675533 ignition[869]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 15 01:08:12.675533 ignition[869]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" 
May 15 01:08:12.675533 ignition[869]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" May 15 01:08:12.675533 ignition[869]: INFO : files: op(12): [started] processing unit "containerd.service" May 15 01:08:12.675533 ignition[869]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 15 01:08:12.675533 ignition[869]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 15 01:08:12.675533 ignition[869]: INFO : files: op(12): [finished] processing unit "containerd.service" May 15 01:08:12.675533 ignition[869]: INFO : files: op(14): [started] processing unit "prepare-helm.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service" May 15 01:08:12.677107 ignition[869]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 01:08:13.561269 ignition[869]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 01:08:13.561497 ignition[869]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service" May 15 01:08:13.561497 ignition[869]: INFO : files: op(1a): [started] setting preset to enabled for "vmtoolsd.service" May 15 01:08:13.561497 ignition[869]: INFO : files: op(1a): [finished] setting preset to enabled for "vmtoolsd.service" May 15 01:08:13.561497 ignition[869]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" May 15 01:08:13.561497 ignition[869]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" May 15 01:08:13.562176 ignition[869]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 01:08:13.562176 ignition[869]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 01:08:13.572434 ignition[869]: INFO : files: files passed May 15 01:08:13.572553 ignition[869]: INFO : Ignition finished successfully May 15 01:08:13.573236 systemd[1]: Finished ignition-files.service. 
May 15 01:08:13.576730 kernel: kauditd_printk_skb: 24 callbacks suppressed May 15 01:08:13.576763 kernel: audit: type=1130 audit(1747271293.571:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.574541 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 15 01:08:13.576520 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 15 01:08:13.576971 systemd[1]: Starting ignition-quench.service... May 15 01:08:13.592943 initrd-setup-root-after-ignition[895]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 01:08:13.594750 kernel: audit: type=1130 audit(1747271293.591:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.593514 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 15 01:08:13.593685 systemd[1]: Reached target ignition-complete.target. May 15 01:08:13.597022 systemd[1]: Starting initrd-parse-etc.service... May 15 01:08:13.597272 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 01:08:13.597327 systemd[1]: Finished ignition-quench.service. May 15 01:08:13.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.602760 kernel: audit: type=1130 audit(1747271293.595:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.602782 kernel: audit: type=1131 audit(1747271293.595:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.609117 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 01:08:13.609339 systemd[1]: Finished initrd-parse-etc.service. May 15 01:08:13.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.609672 systemd[1]: Reached target initrd-fs.target. May 15 01:08:13.614540 kernel: audit: type=1130 audit(1747271293.607:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:08:13.614554 kernel: audit: type=1131 audit(1747271293.607:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.614710 systemd[1]: Reached target initrd.target. May 15 01:08:13.614990 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 15 01:08:13.615652 systemd[1]: Starting dracut-pre-pivot.service... May 15 01:08:13.623010 systemd[1]: Finished dracut-pre-pivot.service. May 15 01:08:13.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.623849 systemd[1]: Starting initrd-cleanup.service... May 15 01:08:13.626898 kernel: audit: type=1130 audit(1747271293.621:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.631605 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 01:08:13.631658 systemd[1]: Finished initrd-cleanup.service. May 15 01:08:13.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.632271 systemd[1]: Stopped target nss-lookup.target. May 15 01:08:13.637023 kernel: audit: type=1130 audit(1747271293.630:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.637040 kernel: audit: type=1131 audit(1747271293.630:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.636923 systemd[1]: Stopped target remote-cryptsetup.target. May 15 01:08:13.637086 systemd[1]: Stopped target timers.target. May 15 01:08:13.639770 kernel: audit: type=1131 audit(1747271293.635:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.637203 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 01:08:13.637233 systemd[1]: Stopped dracut-pre-pivot.service. May 15 01:08:13.637380 systemd[1]: Stopped target initrd.target. May 15 01:08:13.639836 systemd[1]: Stopped target basic.target. May 15 01:08:13.640017 systemd[1]: Stopped target ignition-complete.target. May 15 01:08:13.640184 systemd[1]: Stopped target ignition-diskful.target. 
May 15 01:08:13.640349 systemd[1]: Stopped target initrd-root-device.target. May 15 01:08:13.640521 systemd[1]: Stopped target remote-fs.target. May 15 01:08:13.640684 systemd[1]: Stopped target remote-fs-pre.target. May 15 01:08:13.640977 systemd[1]: Stopped target sysinit.target. May 15 01:08:13.641133 systemd[1]: Stopped target local-fs.target. May 15 01:08:13.641288 systemd[1]: Stopped target local-fs-pre.target. May 15 01:08:13.641448 systemd[1]: Stopped target swap.target. May 15 01:08:13.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.641607 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 01:08:13.641632 systemd[1]: Stopped dracut-pre-mount.service. May 15 01:08:13.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.641796 systemd[1]: Stopped target cryptsetup.target. May 15 01:08:13.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.641943 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 01:08:13.641966 systemd[1]: Stopped dracut-initqueue.service. May 15 01:08:13.642150 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 01:08:13.642171 systemd[1]: Stopped ignition-fetch-offline.service. May 15 01:08:13.642294 systemd[1]: Stopped target paths.target. May 15 01:08:13.642432 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 01:08:13.643761 systemd[1]: Stopped systemd-ask-password-console.path. May 15 01:08:13.643871 systemd[1]: Stopped target slices.target. May 15 01:08:13.644052 systemd[1]: Stopped target sockets.target. May 15 01:08:13.644223 systemd[1]: iscsid.socket: Deactivated successfully. May 15 01:08:13.644238 systemd[1]: Closed iscsid.socket. May 15 01:08:13.644369 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 01:08:13.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.644383 systemd[1]: Closed iscsiuio.socket. May 15 01:08:13.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.644545 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 01:08:13.644567 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 15 01:08:13.644712 systemd[1]: ignition-files.service: Deactivated successfully. May 15 01:08:13.644739 systemd[1]: Stopped ignition-files.service. May 15 01:08:13.645279 systemd[1]: Stopping ignition-mount.service... May 15 01:08:13.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.645413 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 15 01:08:13.645444 systemd[1]: Stopped kmod-static-nodes.service. May 15 01:08:13.645987 systemd[1]: Stopping sysroot-boot.service... May 15 01:08:13.646107 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 01:08:13.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.646136 systemd[1]: Stopped systemd-udev-trigger.service. May 15 01:08:13.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.646289 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 01:08:13.646311 systemd[1]: Stopped dracut-pre-trigger.service. May 15 01:08:13.650874 ignition[908]: INFO : Ignition 2.14.0 May 15 01:08:13.650874 ignition[908]: INFO : Stage: umount May 15 01:08:13.651170 ignition[908]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 15 01:08:13.651170 ignition[908]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 15 01:08:13.652185 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 15 01:08:13.653693 ignition[908]: INFO : umount: umount passed May 15 01:08:13.653918 ignition[908]: INFO : Ignition finished successfully May 15 01:08:13.654441 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 01:08:13.654647 systemd[1]: Stopped ignition-mount.service. May 15 01:08:13.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.654976 systemd[1]: Stopped target network.target. May 15 01:08:13.655181 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 01:08:13.655332 systemd[1]: Stopped ignition-disks.service. May 15 01:08:13.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.655595 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 01:08:13.655795 systemd[1]: Stopped ignition-kargs.service. May 15 01:08:13.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.656055 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 01:08:13.656206 systemd[1]: Stopped ignition-setup.service. May 15 01:08:13.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.656683 systemd[1]: Stopping systemd-networkd.service... May 15 01:08:13.657026 systemd[1]: Stopping systemd-resolved.service... May 15 01:08:13.661514 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 01:08:13.661715 systemd[1]: Stopped systemd-resolved.service. 
May 15 01:08:13.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.662835 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 01:08:13.663029 systemd[1]: Stopped systemd-networkd.service. May 15 01:08:13.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.663580 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 01:08:13.663742 systemd[1]: Closed systemd-networkd.socket. May 15 01:08:13.664342 systemd[1]: Stopping network-cleanup.service... May 15 01:08:13.664628 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 01:08:13.664799 systemd[1]: Stopped parse-ip-for-networkd.service. May 15 01:08:13.665048 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 15 01:08:13.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.665257 systemd[1]: Stopped afterburn-network-kargs.service. May 15 01:08:13.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.665530 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 01:08:13.665682 systemd[1]: Stopped systemd-sysctl.service. May 15 01:08:13.664000 audit: BPF prog-id=6 op=UNLOAD May 15 01:08:13.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.666088 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 01:08:13.666252 systemd[1]: Stopped systemd-modules-load.service. May 15 01:08:13.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.665000 audit: BPF prog-id=9 op=UNLOAD May 15 01:08:13.668495 systemd[1]: Stopping systemd-udevd.service... May 15 01:08:13.670826 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 01:08:13.671286 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 01:08:13.671490 systemd[1]: Stopped systemd-udevd.service. May 15 01:08:13.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.672234 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 01:08:13.672448 systemd[1]: Closed systemd-udevd-control.socket. May 15 01:08:13.672682 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 01:08:13.672851 systemd[1]: Closed systemd-udevd-kernel.socket. May 15 01:08:13.673072 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 01:08:13.673223 systemd[1]: Stopped dracut-pre-udev.service. May 15 01:08:13.673462 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
May 15 01:08:13.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.673642 systemd[1]: Stopped dracut-cmdline.service. May 15 01:08:13.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.673922 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 01:08:13.674076 systemd[1]: Stopped dracut-cmdline-ask.service. May 15 01:08:13.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.674682 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 15 01:08:13.674965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 01:08:13.675130 systemd[1]: Stopped systemd-vconsole-setup.service. May 15 01:08:13.675532 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 01:08:13.675875 systemd[1]: Stopped network-cleanup.service. May 15 01:08:13.677041 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 01:08:13.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.678026 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 01:08:13.678221 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 15 01:08:13.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.792645 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 01:08:13.792892 systemd[1]: Stopped sysroot-boot.service. May 15 01:08:13.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.793240 systemd[1]: Reached target initrd-switch-root.target. May 15 01:08:13.793458 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 01:08:13.793611 systemd[1]: Stopped initrd-setup-root.service. May 15 01:08:13.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:13.794286 systemd[1]: Starting initrd-switch-root.service... May 15 01:08:13.831622 systemd[1]: Switching root. 
May 15 01:08:13.832000 audit: BPF prog-id=5 op=UNLOAD May 15 01:08:13.832000 audit: BPF prog-id=4 op=UNLOAD May 15 01:08:13.832000 audit: BPF prog-id=3 op=UNLOAD May 15 01:08:13.833000 audit: BPF prog-id=8 op=UNLOAD May 15 01:08:13.833000 audit: BPF prog-id=7 op=UNLOAD May 15 01:08:13.849009 iscsid[740]: iscsid shutting down. May 15 01:08:13.849216 systemd-journald[217]: Journal stopped May 15 01:08:19.607259 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). May 15 01:08:19.607280 kernel: SELinux: Class mctp_socket not defined in policy. May 15 01:08:19.607289 kernel: SELinux: Class anon_inode not defined in policy. May 15 01:08:19.607295 kernel: SELinux: the above unknown classes and permissions will be allowed May 15 01:08:19.607301 kernel: SELinux: policy capability network_peer_controls=1 May 15 01:08:19.607308 kernel: SELinux: policy capability open_perms=1 May 15 01:08:19.607314 kernel: SELinux: policy capability extended_socket_class=1 May 15 01:08:19.607320 kernel: SELinux: policy capability always_check_network=0 May 15 01:08:19.607326 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 01:08:19.607334 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 01:08:19.607340 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 01:08:19.607348 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 01:08:19.607360 systemd[1]: Successfully loaded SELinux policy in 146.426ms. May 15 01:08:19.607370 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.551ms. May 15 01:08:19.607381 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 01:08:19.607388 systemd[1]: Detected virtualization vmware. May 15 01:08:19.607396 systemd[1]: Detected architecture x86-64. May 15 01:08:19.607406 systemd[1]: Detected first boot. May 15 01:08:19.607418 systemd[1]: Initializing machine ID from random generator. May 15 01:08:19.607429 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 15 01:08:19.607440 systemd[1]: Populated /etc with preset unit settings. May 15 01:08:19.607451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 01:08:19.607463 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 01:08:19.607475 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:08:19.607489 systemd[1]: Queued start job for default target multi-user.target. May 15 01:08:19.607500 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 15 01:08:19.607507 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 01:08:19.607514 systemd[1]: Created slice system-addon\x2drun.slice. May 15 01:08:19.607523 systemd[1]: Created slice system-getty.slice. May 15 01:08:19.607534 systemd[1]: Created slice system-modprobe.slice. May 15 01:08:19.607543 systemd[1]: Created slice system-serial\x2dgetty.slice. 
May 15 01:08:19.607552 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 01:08:19.607559 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 01:08:19.607566 systemd[1]: Created slice user.slice. May 15 01:08:19.607573 systemd[1]: Started systemd-ask-password-console.path. May 15 01:08:19.607580 systemd[1]: Started systemd-ask-password-wall.path. May 15 01:08:19.607587 systemd[1]: Set up automount boot.automount. May 15 01:08:19.607593 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 01:08:19.607599 systemd[1]: Reached target integritysetup.target. May 15 01:08:19.607606 systemd[1]: Reached target remote-cryptsetup.target. May 15 01:08:19.607615 systemd[1]: Reached target remote-fs.target. May 15 01:08:19.607622 systemd[1]: Reached target slices.target. May 15 01:08:19.607629 systemd[1]: Reached target swap.target. May 15 01:08:19.607636 systemd[1]: Reached target torcx.target. May 15 01:08:19.607643 systemd[1]: Reached target veritysetup.target. May 15 01:08:19.607650 systemd[1]: Listening on systemd-coredump.socket. May 15 01:08:19.607656 systemd[1]: Listening on systemd-initctl.socket. May 15 01:08:19.607663 kernel: kauditd_printk_skb: 46 callbacks suppressed May 15 01:08:19.607670 kernel: audit: type=1400 audit(1747271299.473:84): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 01:08:19.607677 systemd[1]: Listening on systemd-journald-audit.socket. May 15 01:08:19.607684 kernel: audit: type=1335 audit(1747271299.473:85): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 15 01:08:19.607690 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 01:08:19.607698 systemd[1]: Listening on systemd-journald.socket. May 15 01:08:19.607704 systemd[1]: Listening on systemd-networkd.socket. May 15 01:08:19.607711 systemd[1]: Listening on systemd-udevd-control.socket. May 15 01:08:19.607719 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 01:08:19.607726 systemd[1]: Listening on systemd-userdbd.socket. May 15 01:08:19.607746 systemd[1]: Mounting dev-hugepages.mount... May 15 01:08:19.607755 systemd[1]: Mounting dev-mqueue.mount... May 15 01:08:19.607762 systemd[1]: Mounting media.mount... May 15 01:08:19.607771 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:08:19.607778 systemd[1]: Mounting sys-kernel-debug.mount... May 15 01:08:19.607785 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 01:08:19.607792 systemd[1]: Mounting tmp.mount... May 15 01:08:19.607807 systemd[1]: Starting flatcar-tmpfiles.service... May 15 01:08:19.607814 systemd[1]: Starting ignition-delete-config.service... May 15 01:08:19.607821 systemd[1]: Starting kmod-static-nodes.service... May 15 01:08:19.607830 systemd[1]: Starting modprobe@configfs.service... May 15 01:08:19.607840 systemd[1]: Starting modprobe@dm_mod.service... May 15 01:08:19.607847 systemd[1]: Starting modprobe@drm.service... May 15 01:08:19.607854 systemd[1]: Starting modprobe@efi_pstore.service... May 15 01:08:19.607861 systemd[1]: Starting modprobe@fuse.service... May 15 01:08:19.607872 systemd[1]: Starting modprobe@loop.service... 
May 15 01:08:19.607885 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 01:08:19.607897 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 15 01:08:19.607909 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 15 01:08:19.607917 systemd[1]: Starting systemd-journald.service... May 15 01:08:19.607926 systemd[1]: Starting systemd-modules-load.service... May 15 01:08:19.607933 systemd[1]: Starting systemd-network-generator.service... May 15 01:08:19.607940 systemd[1]: Starting systemd-remount-fs.service... May 15 01:08:19.607946 systemd[1]: Starting systemd-udev-trigger.service... May 15 01:08:19.607953 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:08:19.607960 systemd[1]: Mounted dev-hugepages.mount. May 15 01:08:19.607967 systemd[1]: Mounted dev-mqueue.mount. May 15 01:08:19.607974 systemd[1]: Mounted media.mount. May 15 01:08:19.607981 systemd[1]: Mounted sys-kernel-debug.mount. May 15 01:08:19.607988 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 01:08:19.607996 systemd[1]: Mounted tmp.mount. May 15 01:08:19.608003 systemd[1]: Finished kmod-static-nodes.service. May 15 01:08:19.608010 kernel: audit: type=1130 audit(1747271299.550:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.608017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 01:08:19.608024 systemd[1]: Finished modprobe@dm_mod.service. May 15 01:08:19.608031 kernel: audit: type=1130 audit(1747271299.560:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.608038 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 01:08:19.608046 kernel: audit: type=1131 audit(1747271299.560:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.608053 systemd[1]: Finished modprobe@drm.service. May 15 01:08:19.608060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 01:08:19.608067 kernel: audit: type=1130 audit(1747271299.569:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.608074 systemd[1]: Finished modprobe@efi_pstore.service. May 15 01:08:19.608081 kernel: audit: type=1131 audit(1747271299.569:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.608087 systemd[1]: Finished systemd-modules-load.service. May 15 01:08:19.608094 kernel: audit: type=1130 audit(1747271299.576:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:08:19.608102 kernel: audit: type=1131 audit(1747271299.576:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.608108 kernel: audit: type=1130 audit(1747271299.578:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.608115 systemd[1]: Finished systemd-network-generator.service. May 15 01:08:19.608122 systemd[1]: Finished systemd-remount-fs.service. May 15 01:08:19.608128 systemd[1]: Reached target network-pre.target. May 15 01:08:19.608135 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 01:08:19.608145 systemd-journald[1053]: Journal started May 15 01:08:19.608176 systemd-journald[1053]: Runtime Journal (/run/log/journal/a852e83cf7294a278fb912bc2888298a) is 4.8M, max 38.8M, 34.0M free. May 15 01:08:19.473000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 01:08:19.473000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 15 01:08:19.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:08:19.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.603000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 01:08:19.603000 audit[1053]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc6833ecb0 a2=4000 a3=7ffc6833ed4c items=0 ppid=1 pid=1053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:08:19.603000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 01:08:19.609884 jq[1039]: true May 15 01:08:19.612384 jq[1068]: true May 15 01:08:19.615844 systemd[1]: Starting systemd-hwdb-update.service... May 15 01:08:19.615874 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 01:08:19.615887 systemd[1]: Starting systemd-random-seed.service... May 15 01:08:19.617891 systemd[1]: Starting systemd-sysctl.service... May 15 01:08:19.617913 systemd[1]: Started systemd-journald.service. May 15 01:08:19.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.619591 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 01:08:19.622308 systemd[1]: Finished modprobe@configfs.service. May 15 01:08:19.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.623660 systemd[1]: Mounting sys-kernel-config.mount... May 15 01:08:19.624705 systemd[1]: Starting systemd-journal-flush.service... May 15 01:08:19.625929 systemd[1]: Mounted sys-kernel-config.mount. May 15 01:08:19.633758 kernel: fuse: init (API version 7.34) May 15 01:08:19.640303 kernel: loop: module loaded May 15 01:08:19.638917 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 01:08:19.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:08:19.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.639015 systemd[1]: Finished modprobe@fuse.service. May 15 01:08:19.639256 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:08:19.639330 systemd[1]: Finished modprobe@loop.service. May 15 01:08:19.640397 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 01:08:19.640520 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 01:08:19.642416 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 01:08:19.648599 systemd[1]: Finished flatcar-tmpfiles.service. May 15 01:08:19.649607 systemd[1]: Starting systemd-sysusers.service... May 15 01:08:19.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.688988 systemd[1]: Finished systemd-udev-trigger.service. May 15 01:08:19.690009 systemd[1]: Starting systemd-udev-settle.service... May 15 01:08:19.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.711101 udevadm[1117]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 01:08:19.715685 systemd-journald[1053]: Time spent on flushing to /var/log/journal/a852e83cf7294a278fb912bc2888298a is 25.599ms for 1953 entries. May 15 01:08:19.715685 systemd-journald[1053]: System Journal (/var/log/journal/a852e83cf7294a278fb912bc2888298a) is 8.0M, max 584.8M, 576.8M free. May 15 01:08:19.765121 systemd-journald[1053]: Received client request to flush runtime journal. May 15 01:08:19.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:19.727428 systemd[1]: Finished systemd-random-seed.service. May 15 01:08:19.727587 systemd[1]: Reached target first-boot-complete.target. May 15 01:08:19.763569 systemd[1]: Finished systemd-sysctl.service. May 15 01:08:19.765550 systemd[1]: Finished systemd-journal-flush.service. May 15 01:08:20.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 15 01:08:20.079631 systemd[1]: Finished systemd-sysusers.service. May 15 01:08:20.080704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 01:08:20.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:20.535963 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 01:08:20.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:20.627999 systemd[1]: Finished systemd-hwdb-update.service. May 15 01:08:20.629050 systemd[1]: Starting systemd-udevd.service... May 15 01:08:20.642340 systemd-udevd[1133]: Using default interface naming scheme 'v252'. May 15 01:08:20.705414 ignition[1085]: Ignition 2.14.0 May 15 01:08:20.705864 ignition[1085]: deleting config from guestinfo properties May 15 01:08:20.709514 ignition[1085]: Successfully deleted config May 15 01:08:20.710575 systemd[1]: Finished ignition-delete-config.service. May 15 01:08:20.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:20.725764 systemd[1]: Started systemd-udevd.service. May 15 01:08:20.727028 systemd[1]: Starting systemd-networkd.service... May 15 01:08:20.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:20.738532 systemd[1]: Starting systemd-userdbd.service... May 15 01:08:20.759021 systemd[1]: Found device dev-ttyS0.device. May 15 01:08:20.777775 systemd[1]: Started systemd-userdbd.service. May 15 01:08:20.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:20.793749 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 01:08:20.797753 kernel: ACPI: button: Power Button [PWRF] May 15 01:08:20.888788 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 May 15 01:08:20.912313 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 15 01:08:20.912398 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! 
May 15 01:08:20.912472 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 15 01:08:20.912484 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 15 01:08:20.912559 kernel: Guest personality initialized and is active May 15 01:08:20.912570 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 15 01:08:20.885000 audit[1145]: AVC avc: denied { confidentiality } for pid=1145 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 01:08:20.885000 audit[1145]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5595089b9df0 a1=338ac a2=7f2b28f07bc5 a3=5 items=110 ppid=1133 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:08:20.885000 audit: CWD cwd="/" May 15 01:08:20.885000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=1 name=(null) inode=24916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=2 name=(null) inode=24916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=3 name=(null) inode=24917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=4 name=(null) inode=24916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=5 name=(null) inode=24918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=6 name=(null) inode=24916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=7 name=(null) inode=24919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=8 name=(null) inode=24919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=9 name=(null) inode=24920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=10 name=(null) inode=24919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=11 name=(null) inode=24921 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=12 name=(null) inode=24919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=13 name=(null) inode=24922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=14 name=(null) inode=24919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=15 name=(null) inode=24923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=16 name=(null) inode=24919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=17 name=(null) inode=24924 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=18 name=(null) inode=24916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=19 name=(null) inode=24925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=20 name=(null) inode=24925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=21 name=(null) inode=24926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=22 name=(null) inode=24925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=23 name=(null) inode=24927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=24 name=(null) inode=24925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=25 name=(null) inode=24928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=26 name=(null) inode=24925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=27 name=(null) inode=24929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
01:08:20.885000 audit: PATH item=28 name=(null) inode=24925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=29 name=(null) inode=24930 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=30 name=(null) inode=24916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=31 name=(null) inode=24931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=32 name=(null) inode=24931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=33 name=(null) inode=24932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=34 name=(null) inode=24931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=35 name=(null) inode=24933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=36 name=(null) inode=24931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=37 name=(null) inode=24934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=38 name=(null) inode=24931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=39 name=(null) inode=24935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=40 name=(null) inode=24931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=41 name=(null) inode=24936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=42 name=(null) inode=24916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=43 name=(null) inode=24937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=44 name=(null) inode=24937 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=45 name=(null) inode=24938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=46 name=(null) inode=24937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=47 name=(null) inode=24939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=48 name=(null) inode=24937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=49 name=(null) inode=24940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=50 name=(null) inode=24937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=51 name=(null) inode=24941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=52 name=(null) inode=24937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=53 name=(null) inode=24942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=55 name=(null) inode=24943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=56 name=(null) inode=24943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=57 name=(null) inode=24944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=58 name=(null) inode=24943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=59 name=(null) inode=24945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=60 name=(null) inode=24943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=61 name=(null) inode=24946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=62 name=(null) inode=24946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=63 name=(null) inode=24947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=64 name=(null) inode=24946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=65 name=(null) inode=24948 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=66 name=(null) inode=24946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=67 name=(null) inode=24949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=68 name=(null) inode=24946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=69 name=(null) inode=24950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=70 name=(null) inode=24946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=71 name=(null) inode=24951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=72 name=(null) inode=24943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=73 name=(null) inode=24952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=74 name=(null) inode=24952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=75 name=(null) inode=24953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=76 name=(null) inode=24952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
01:08:20.885000 audit: PATH item=77 name=(null) inode=24954 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=78 name=(null) inode=24952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=79 name=(null) inode=24955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=80 name=(null) inode=24952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=81 name=(null) inode=24956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=82 name=(null) inode=24952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=83 name=(null) inode=24957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=84 name=(null) inode=24943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=85 name=(null) inode=24958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=86 name=(null) inode=24958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=87 name=(null) inode=24959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=88 name=(null) inode=24958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=89 name=(null) inode=24960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=90 name=(null) inode=24958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=91 name=(null) inode=24961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=92 name=(null) inode=24958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=93 name=(null) inode=24962 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=94 name=(null) inode=24958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=95 name=(null) inode=24963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=96 name=(null) inode=24943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=97 name=(null) inode=24964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=98 name=(null) inode=24964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=99 name=(null) inode=24965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=100 name=(null) inode=24964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=101 name=(null) inode=24966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=102 name=(null) inode=24964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=103 name=(null) inode=24967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=104 name=(null) inode=24964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=105 name=(null) inode=24968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=106 name=(null) inode=24964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=107 name=(null) inode=24969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PATH item=109 name=(null) inode=24970 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:08:20.885000 audit: PROCTITLE proctitle="(udev-worker)" May 15 01:08:20.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:20.908113 systemd-networkd[1141]: lo: Link UP May 15 01:08:20.908116 systemd-networkd[1141]: lo: Gained carrier May 15 01:08:20.908403 systemd-networkd[1141]: Enumeration completed May 15 01:08:20.908470 systemd-networkd[1141]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 15 01:08:20.908632 systemd[1]: Started systemd-networkd.service. May 15 01:08:20.917342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready May 15 01:08:20.916949 systemd-networkd[1141]: ens192: Link UP May 15 01:08:20.917113 systemd-networkd[1141]: ens192: Gained carrier May 15 01:08:20.920838 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 01:08:20.920887 kernel: Initialized host personality May 15 01:08:20.928793 kernel: mousedev: PS/2 mouse device common for all mice May 15 01:08:20.934051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 01:08:20.940316 (udev-worker)[1136]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 15 01:08:20.950039 systemd[1]: Finished systemd-udev-settle.service. May 15 01:08:20.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:20.951070 systemd[1]: Starting lvm2-activation-early.service... May 15 01:08:21.036235 lvm[1168]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 01:08:21.075485 systemd[1]: Finished lvm2-activation-early.service. May 15 01:08:21.075704 systemd[1]: Reached target cryptsetup.target. May 15 01:08:21.076919 systemd[1]: Starting lvm2-activation.service... May 15 01:08:21.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.080650 lvm[1170]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 01:08:21.104362 systemd[1]: Finished lvm2-activation.service. May 15 01:08:21.104560 systemd[1]: Reached target local-fs-pre.target. May 15 01:08:21.104690 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 01:08:21.104709 systemd[1]: Reached target local-fs.target. May 15 01:08:21.104837 systemd[1]: Reached target machines.target. May 15 01:08:21.106042 systemd[1]: Starting ldconfig.service... May 15 01:08:21.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.111563 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 01:08:21.111614 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
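[Editor's note] systemd-networkd matches ens192 against /etc/systemd/network/00-vmware.network, as logged above. The file's actual contents are not shown in this log; a typical DHCP configuration of that shape would be:

    # hypothetical sketch of a 00-vmware.network-style file
    [Match]
    Name=ens192

    [Network]
    DHCP=yes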
May 15 01:08:21.112668 systemd[1]: Starting systemd-boot-update.service... May 15 01:08:21.113656 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 01:08:21.114803 systemd[1]: Starting systemd-machine-id-commit.service... May 15 01:08:21.116072 systemd[1]: Starting systemd-sysext.service... May 15 01:08:21.121279 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1173 (bootctl) May 15 01:08:21.122006 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 01:08:21.134989 systemd[1]: Unmounting usr-share-oem.mount... May 15 01:08:21.137261 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 01:08:21.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.137726 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 01:08:21.137857 systemd[1]: Unmounted usr-share-oem.mount. May 15 01:08:21.149752 kernel: loop0: detected capacity change from 0 to 210664 May 15 01:08:21.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.151977 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 01:08:21.152386 systemd[1]: Finished systemd-machine-id-commit.service. May 15 01:08:21.247745 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 01:08:21.265750 kernel: loop1: detected capacity change from 0 to 210664 May 15 01:08:21.285915 systemd-fsck[1186]: fsck.fat 4.2 (2021-01-31) May 15 01:08:21.285915 systemd-fsck[1186]: /dev/sda1: 790 files, 120690/258078 clusters May 15 01:08:21.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.286821 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 01:08:21.287889 systemd[1]: Mounting boot.mount... May 15 01:08:21.289770 (sd-sysext)[1189]: Using extensions 'kubernetes'. May 15 01:08:21.290756 (sd-sysext)[1189]: Merged extensions into '/usr'. May 15 01:08:21.303857 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:08:21.304845 systemd[1]: Mounting usr-share-oem.mount... May 15 01:08:21.306414 systemd[1]: Starting modprobe@dm_mod.service... May 15 01:08:21.307176 systemd[1]: Starting modprobe@efi_pstore.service... May 15 01:08:21.307888 systemd[1]: Starting modprobe@loop.service... May 15 01:08:21.308014 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 01:08:21.308090 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 01:08:21.308167 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:08:21.308659 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
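[Editor's note] systemd-sysext reports merging the 'kubernetes' extension into /usr. Extension images are normally picked up from directories such as /etc/extensions or /var/lib/extensions, and the merge state can be inspected with the sysext tool (generic illustration, not output from this boot):

    systemd-sysext status     # show which hierarchies have extensions merged
    systemd-sysext refresh    # re-merge after adding or removing an extension image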
May 15 01:08:21.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.308983 systemd[1]: Finished modprobe@dm_mod.service. May 15 01:08:21.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.310898 systemd[1]: Mounted usr-share-oem.mount. May 15 01:08:21.311150 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:08:21.311223 systemd[1]: Finished modprobe@loop.service. May 15 01:08:21.311507 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 01:08:21.312678 systemd[1]: Finished systemd-sysext.service. May 15 01:08:21.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.312925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 01:08:21.313005 systemd[1]: Finished modprobe@efi_pstore.service. May 15 01:08:21.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.314391 systemd[1]: Starting ensure-sysext.service... May 15 01:08:21.314526 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 01:08:21.315337 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 01:08:21.323802 systemd[1]: Reloading. May 15 01:08:21.330588 systemd-tmpfiles[1206]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 01:08:21.332186 systemd-tmpfiles[1206]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 01:08:21.334319 systemd-tmpfiles[1206]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
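[Editor's note] The "Duplicate line" messages from systemd-tmpfiles mean that two tmpfiles.d fragments declare the same path; only the first declaration is applied and later ones are ignored. The kind of entry being warned about looks roughly like this (path taken from the log, mode and ownership illustrative):

    # /usr/lib/tmpfiles.d/legacy.conf — ignored because an earlier fragment already declared /run/lock
    d /run/lock 0755 root root -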
May 15 01:08:21.361621 /usr/lib/systemd/system-generators/torcx-generator[1225]: time="2025-05-15T01:08:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 01:08:21.361644 /usr/lib/systemd/system-generators/torcx-generator[1225]: time="2025-05-15T01:08:21Z" level=info msg="torcx already run" May 15 01:08:21.430012 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 01:08:21.430024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 01:08:21.441523 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:08:21.480767 systemd[1]: Mounted boot.mount. May 15 01:08:21.488104 systemd[1]: Starting modprobe@dm_mod.service... May 15 01:08:21.488921 systemd[1]: Starting modprobe@efi_pstore.service... May 15 01:08:21.489692 systemd[1]: Starting modprobe@loop.service... May 15 01:08:21.489903 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 01:08:21.489978 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 01:08:21.490375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 01:08:21.490764 systemd[1]: Finished modprobe@dm_mod.service. May 15 01:08:21.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.491277 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:08:21.491354 systemd[1]: Finished modprobe@loop.service. May 15 01:08:21.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.491835 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 01:08:21.493788 systemd[1]: Starting modprobe@dm_mod.service... May 15 01:08:21.494562 systemd[1]: Starting modprobe@loop.service... May 15 01:08:21.494807 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 15 01:08:21.494880 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 01:08:21.495240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 01:08:21.495331 systemd[1]: Finished modprobe@efi_pstore.service. May 15 01:08:21.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.495890 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:08:21.496014 systemd[1]: Finished modprobe@loop.service. May 15 01:08:21.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.496417 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 01:08:21.499079 systemd[1]: Starting modprobe@drm.service... May 15 01:08:21.499951 systemd[1]: Starting modprobe@efi_pstore.service... May 15 01:08:21.501148 systemd[1]: Starting modprobe@loop.service... May 15 01:08:21.501349 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 01:08:21.501428 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 01:08:21.502526 systemd[1]: Starting systemd-networkd-wait-online.service... May 15 01:08:21.503377 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 01:08:21.503536 systemd[1]: Finished modprobe@drm.service. May 15 01:08:21.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.504057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 01:08:21.504182 systemd[1]: Finished modprobe@efi_pstore.service. May 15 01:08:21.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:08:21.504688 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 01:08:21.505742 systemd[1]: Finished ensure-sysext.service. May 15 01:08:21.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.506970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 01:08:21.507051 systemd[1]: Finished modprobe@dm_mod.service. May 15 01:08:21.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.509319 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 01:08:21.509527 systemd[1]: Finished modprobe@loop.service. May 15 01:08:21.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:21.509881 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 01:08:21.559060 systemd[1]: Finished systemd-boot-update.service. May 15 01:08:21.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:22.136669 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:08:22.136685 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 01:08:22.185395 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 01:08:22.186899 systemd[1]: Starting audit-rules.service... May 15 01:08:22.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:22.188223 systemd[1]: Starting clean-ca-certificates.service... May 15 01:08:22.189761 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 01:08:22.192145 systemd[1]: Starting systemd-resolved.service... May 15 01:08:22.194247 systemd[1]: Starting systemd-timesyncd.service... May 15 01:08:22.196395 systemd[1]: Starting systemd-update-utmp.service... May 15 01:08:22.197101 systemd[1]: Finished clean-ca-certificates.service. May 15 01:08:22.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:08:22.197451 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 01:08:22.208000 audit[1321]: SYSTEM_BOOT pid=1321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 01:08:22.211994 systemd[1]: Finished systemd-update-utmp.service. May 15 01:08:22.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:22.228402 systemd[1]: Finished systemd-journal-catalog-update.service. May 15 01:08:22.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:08:22.249000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 15 01:08:22.249000 audit[1337]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcacf29bb0 a2=420 a3=0 items=0 ppid=1314 pid=1337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:08:22.249000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 15 01:08:22.251472 augenrules[1337]: No rules May 15 01:08:22.251628 systemd[1]: Finished audit-rules.service. May 15 01:08:22.263814 systemd[1]: Started systemd-timesyncd.service. May 15 01:08:22.264016 systemd[1]: Reached target time-set.target. May 15 01:08:22.272869 systemd-resolved[1318]: Positive Trust Anchors: May 15 01:08:22.272880 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 01:08:22.272900 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 01:08:22.353545 systemd-resolved[1318]: Defaulting to hostname 'linux'. May 15 01:08:22.354777 systemd[1]: Started systemd-resolved.service. May 15 01:08:22.354930 systemd[1]: Reached target network.target. May 15 01:08:22.355020 systemd[1]: Reached target nss-lookup.target. May 15 01:08:22.437950 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 01:08:22.454119 systemd[1]: Finished ldconfig.service. May 15 01:08:22.455461 systemd[1]: Starting systemd-update-done.service... May 15 01:08:22.459519 systemd[1]: Finished systemd-update-done.service. May 15 01:08:22.459688 systemd[1]: Reached target sysinit.target. May 15 01:08:22.459841 systemd[1]: Started motdgen.path. May 15 01:08:22.459943 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
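The PROCTITLE value in the audit record above is the auditctl command line, hex-encoded with NUL-separated arguments; decoded it reads /sbin/auditctl -R /etc/audit/audit.rules. A small Python sketch for decoding such fields when reading raw audit output (the sample string is taken from that record):

    # decode an audit PROCTITLE hex string back into a readable command line
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        # the kernel separates argv entries with NUL bytes
        return " ".join(p.decode("utf-8", errors="replace") for p in raw.split(b"\x00") if p)

    if __name__ == "__main__":
        sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
        print(decode_proctitle(sample))  # /sbin/auditctl -R /etc/audit/audit.rules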
May 15 01:08:22.460128 systemd[1]: Started logrotate.timer. May 15 01:08:22.460257 systemd[1]: Started mdadm.timer. May 15 01:08:22.460340 systemd[1]: Started systemd-tmpfiles-clean.timer. May 15 01:08:22.460432 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 01:08:22.460451 systemd[1]: Reached target paths.target. May 15 01:08:22.460532 systemd[1]: Reached target timers.target. May 15 01:08:22.460777 systemd[1]: Listening on dbus.socket. May 15 01:08:22.461657 systemd[1]: Starting docker.socket... May 15 01:08:22.467978 systemd[1]: Listening on sshd.socket. May 15 01:08:22.468120 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 01:08:22.468419 systemd[1]: Listening on docker.socket. May 15 01:08:22.468528 systemd[1]: Reached target sockets.target. May 15 01:08:22.468614 systemd[1]: Reached target basic.target. May 15 01:08:22.468800 systemd[1]: System is tainted: cgroupsv1 May 15 01:08:22.468829 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 01:08:22.468843 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 01:08:22.469575 systemd[1]: Starting containerd.service... May 15 01:08:22.470417 systemd[1]: Starting dbus.service... May 15 01:08:22.471277 systemd[1]: Starting enable-oem-cloudinit.service... May 15 01:08:22.472124 systemd[1]: Starting extend-filesystems.service... May 15 01:08:22.472397 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 15 01:08:22.473632 systemd[1]: Starting motdgen.service... May 15 01:08:22.475618 jq[1352]: false May 15 01:08:22.478634 systemd[1]: Starting prepare-helm.service... May 15 01:08:22.479496 systemd[1]: Starting ssh-key-proc-cmdline.service... May 15 01:08:22.480513 systemd[1]: Starting sshd-keygen.service... May 15 01:08:22.481852 systemd[1]: Starting systemd-logind.service... May 15 01:08:22.481959 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 01:08:22.481996 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 01:08:22.482907 systemd[1]: Starting update-engine.service... May 15 01:08:22.483707 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 15 01:08:22.484612 systemd[1]: Starting vmtoolsd.service... May 15 01:08:22.486329 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 01:08:22.495395 jq[1364]: true May 15 01:08:22.486454 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 15 01:08:22.487649 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 01:08:22.487787 systemd[1]: Finished ssh-key-proc-cmdline.service. May 15 01:08:22.506773 tar[1369]: linux-amd64/helm May 15 01:09:43.472032 systemd-resolved[1318]: Clock change detected. Flushing caches. May 15 01:09:43.472104 systemd-timesyncd[1319]: Contacted time server 96.231.54.40:123 (0.flatcar.pool.ntp.org). 
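systemd-timesyncd has just reached 0.flatcar.pool.ntp.org, and the clock step it applies at first synchronization is why systemd-resolved flushes its caches here and why the journal timestamps jump from 01:08 to 01:09. If a site wanted to point the daemon at its own servers instead of the pool, the drop-in would look roughly like this; the server names are placeholders, not anything configured on this host:

    # /etc/systemd/timesyncd.conf.d/10-local-ntp.conf  (hypothetical drop-in)
    [Time]
    NTP=ntp1.example.internal ntp2.example.internal
    FallbackNTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org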
May 15 01:09:43.472135 systemd-timesyncd[1319]: Initial clock synchronization to Thu 2025-05-15 01:09:43.472002 UTC. May 15 01:09:43.482485 extend-filesystems[1353]: Found loop1 May 15 01:09:43.482485 extend-filesystems[1353]: Found sda May 15 01:09:43.482485 extend-filesystems[1353]: Found sda1 May 15 01:09:43.482485 extend-filesystems[1353]: Found sda2 May 15 01:09:43.483061 extend-filesystems[1353]: Found sda3 May 15 01:09:43.483061 extend-filesystems[1353]: Found usr May 15 01:09:43.483061 extend-filesystems[1353]: Found sda4 May 15 01:09:43.483061 extend-filesystems[1353]: Found sda6 May 15 01:09:43.483061 extend-filesystems[1353]: Found sda7 May 15 01:09:43.483061 extend-filesystems[1353]: Found sda9 May 15 01:09:43.483061 extend-filesystems[1353]: Checking size of /dev/sda9 May 15 01:09:43.491143 jq[1373]: true May 15 01:09:43.492910 systemd[1]: Started vmtoolsd.service. May 15 01:09:43.497673 systemd[1]: motdgen.service: Deactivated successfully. May 15 01:09:43.497801 systemd[1]: Finished motdgen.service. May 15 01:09:43.513903 extend-filesystems[1353]: Old size kept for /dev/sda9 May 15 01:09:43.514815 extend-filesystems[1353]: Found sr0 May 15 01:09:43.514432 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 01:09:43.514563 systemd[1]: Finished extend-filesystems.service. May 15 01:09:43.538060 env[1378]: time="2025-05-15T01:09:43.537408375Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 15 01:09:43.541348 bash[1408]: Updated "/home/core/.ssh/authorized_keys" May 15 01:09:43.542603 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 15 01:09:43.562319 systemd-networkd[1141]: ens192: Gained IPv6LL May 15 01:09:43.569073 kernel: NET: Registered PF_VSOCK protocol family May 15 01:09:43.564563 systemd[1]: Finished systemd-networkd-wait-online.service. May 15 01:09:43.564729 systemd[1]: Reached target network-online.target. May 15 01:09:43.569577 dbus-daemon[1350]: [system] SELinux support is enabled May 15 01:09:43.566796 systemd[1]: Starting kubelet.service... May 15 01:09:43.580440 update_engine[1363]: I0515 01:09:43.579515 1363 main.cc:92] Flatcar Update Engine starting May 15 01:09:43.569697 systemd[1]: Started dbus.service. May 15 01:09:43.571593 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 01:09:43.571617 systemd[1]: Reached target system-config.target. May 15 01:09:43.571864 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 01:09:43.571880 systemd[1]: Reached target user-config.target. May 15 01:09:43.580330 systemd-logind[1361]: Watching system buttons on /dev/input/event1 (Power Button) May 15 01:09:43.580347 systemd-logind[1361]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 01:09:43.580542 systemd-logind[1361]: New seat seat0. May 15 01:09:43.582398 systemd[1]: Started systemd-logind.service. May 15 01:09:43.584980 update_engine[1363]: I0515 01:09:43.584821 1363 update_check_scheduler.cc:74] Next update check in 8m47s May 15 01:09:43.582665 systemd[1]: Started update-engine.service. May 15 01:09:43.584042 systemd[1]: Started locksmithd.service. May 15 01:09:43.606385 env[1378]: time="2025-05-15T01:09:43.605642769Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 15 01:09:43.606554 env[1378]: time="2025-05-15T01:09:43.606542270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.612696674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.612729851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.612896265Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.612908807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.612917294Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.612925291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.612982462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.613138057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.613243017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 01:09:43.613355 env[1378]: time="2025-05-15T01:09:43.613254849Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 01:09:43.613586 env[1378]: time="2025-05-15T01:09:43.613290547Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 15 01:09:43.613586 env[1378]: time="2025-05-15T01:09:43.613301952Z" level=info msg="metadata content store policy set" policy=shared May 15 01:09:43.616872 env[1378]: time="2025-05-15T01:09:43.616850334Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 01:09:43.616925 env[1378]: time="2025-05-15T01:09:43.616874221Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 01:09:43.616925 env[1378]: time="2025-05-15T01:09:43.616888690Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 01:09:43.616925 env[1378]: time="2025-05-15T01:09:43.616908964Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 15 01:09:43.616925 env[1378]: time="2025-05-15T01:09:43.616917270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 01:09:43.616993 env[1378]: time="2025-05-15T01:09:43.616925263Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 01:09:43.616993 env[1378]: time="2025-05-15T01:09:43.616932316Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 01:09:43.616993 env[1378]: time="2025-05-15T01:09:43.616939909Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 01:09:43.616993 env[1378]: time="2025-05-15T01:09:43.616947122Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 15 01:09:43.616993 env[1378]: time="2025-05-15T01:09:43.616954155Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 01:09:43.616993 env[1378]: time="2025-05-15T01:09:43.616961324Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 01:09:43.616993 env[1378]: time="2025-05-15T01:09:43.616971681Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 01:09:43.617109 env[1378]: time="2025-05-15T01:09:43.617031038Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 01:09:43.617109 env[1378]: time="2025-05-15T01:09:43.617079124Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617312859Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617332232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617341244Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617377004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617387978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617395708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617402224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617409432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617419569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617428770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617435238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617442594Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617510980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617520458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618015 env[1378]: time="2025-05-15T01:09:43.617528078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618804 env[1378]: time="2025-05-15T01:09:43.617534448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 01:09:43.618804 env[1378]: time="2025-05-15T01:09:43.617543913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 15 01:09:43.618804 env[1378]: time="2025-05-15T01:09:43.617549865Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 01:09:43.618804 env[1378]: time="2025-05-15T01:09:43.617561935Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 15 01:09:43.618804 env[1378]: time="2025-05-15T01:09:43.617587032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 01:09:43.618344 systemd[1]: Started containerd.service. 
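containerd is now up, and the CRI plugin configuration dumped in the next entry reports SystemdCgroup:false for the runc runtime and sandbox image registry.k8s.io/pause:3.6. As a rough sketch, the corresponding stanza in /etc/containerd/config.toml would look like this; it mirrors the logged values rather than reproducing the actual file on this image:

    # /etc/containerd/config.toml (excerpt, sketched from the values logged below)
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      # kubelet and containerd must agree on the cgroup driver; this boot uses cgroupfs
      SystemdCgroup = false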
May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.617728295Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.617763896Z" level=info msg="Connect containerd service" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.617785448Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618071850Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618195483Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618224692Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618277747Z" level=info msg="containerd successfully booted in 0.095471s" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618601112Z" level=info msg="Start subscribing containerd event" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618630566Z" level=info msg="Start recovering state" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618664408Z" level=info msg="Start event monitor" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618677529Z" level=info msg="Start snapshots syncer" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618682833Z" level=info msg="Start cni network conf syncer for default" May 15 01:09:43.618946 env[1378]: time="2025-05-15T01:09:43.618687484Z" level=info msg="Start streaming server" May 15 01:09:43.909216 tar[1369]: linux-amd64/LICENSE May 15 01:09:43.909216 tar[1369]: linux-amd64/README.md May 15 01:09:43.911951 systemd[1]: Finished prepare-helm.service. May 15 01:09:44.034413 sshd_keygen[1383]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 01:09:44.047450 systemd[1]: Finished sshd-keygen.service. May 15 01:09:44.048688 systemd[1]: Starting issuegen.service... May 15 01:09:44.052614 systemd[1]: issuegen.service: Deactivated successfully. May 15 01:09:44.052723 systemd[1]: Finished issuegen.service. May 15 01:09:44.053835 systemd[1]: Starting systemd-user-sessions.service... May 15 01:09:44.063116 systemd[1]: Finished systemd-user-sessions.service. May 15 01:09:44.064091 systemd[1]: Started getty@tty1.service. May 15 01:09:44.064950 systemd[1]: Started serial-getty@ttyS0.service. May 15 01:09:44.065151 systemd[1]: Reached target getty.target. May 15 01:09:44.176545 locksmithd[1426]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 01:09:45.665365 systemd[1]: Started kubelet.service. May 15 01:09:45.665831 systemd[1]: Reached target multi-user.target. May 15 01:09:45.667060 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 15 01:09:45.672726 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 15 01:09:45.672861 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 15 01:09:45.677485 systemd[1]: Startup finished in 8.433s (kernel) + 9.619s (userspace) = 18.053s. May 15 01:09:45.748362 login[1495]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 15 01:09:45.749391 login[1496]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 15 01:09:45.758576 systemd[1]: Created slice user-500.slice. May 15 01:09:45.759299 systemd[1]: Starting user-runtime-dir@500.service... May 15 01:09:45.763642 systemd-logind[1361]: New session 2 of user core. May 15 01:09:45.766094 systemd-logind[1361]: New session 1 of user core. May 15 01:09:45.769016 systemd[1]: Finished user-runtime-dir@500.service. May 15 01:09:45.769869 systemd[1]: Starting user@500.service... May 15 01:09:45.773526 (systemd)[1510]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 01:09:45.849322 systemd[1510]: Queued start job for default target default.target. May 15 01:09:45.849473 systemd[1510]: Reached target paths.target. May 15 01:09:45.849485 systemd[1510]: Reached target sockets.target. May 15 01:09:45.849493 systemd[1510]: Reached target timers.target. May 15 01:09:45.849508 systemd[1510]: Reached target basic.target. 
May 15 01:09:45.849569 systemd[1510]: Reached target default.target. May 15 01:09:45.849589 systemd[1510]: Startup finished in 72ms. May 15 01:09:45.849612 systemd[1]: Started user@500.service. May 15 01:09:45.850243 systemd[1]: Started session-1.scope. May 15 01:09:45.850622 systemd[1]: Started session-2.scope. May 15 01:09:47.083065 kubelet[1504]: E0515 01:09:47.083040 1504 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:09:47.084216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:09:47.084312 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:09:57.159753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 01:09:57.159924 systemd[1]: Stopped kubelet.service. May 15 01:09:57.161175 systemd[1]: Starting kubelet.service... May 15 01:09:57.381064 systemd[1]: Started kubelet.service. May 15 01:09:57.449291 kubelet[1546]: E0515 01:09:57.449206 1546 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:09:57.451696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:09:57.451790 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:10:07.659622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 01:10:07.659753 systemd[1]: Stopped kubelet.service. May 15 01:10:07.660891 systemd[1]: Starting kubelet.service... May 15 01:10:07.713617 systemd[1]: Started kubelet.service. May 15 01:10:07.758217 kubelet[1561]: E0515 01:10:07.758185 1561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:10:07.759407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:10:07.759491 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:10:09.884177 systemd[1]: Created slice system-sshd.slice. May 15 01:10:09.884951 systemd[1]: Started sshd@0-139.178.70.104:22-14.29.198.130:47468.service. May 15 01:10:10.736821 sshd[1569]: Invalid user gabriella from 14.29.198.130 port 47468 May 15 01:10:10.739268 sshd[1569]: pam_faillock(sshd:auth): User unknown May 15 01:10:10.739812 sshd[1569]: pam_unix(sshd:auth): check pass; user unknown May 15 01:10:10.739876 sshd[1569]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.29.198.130 May 15 01:10:10.740255 sshd[1569]: pam_faillock(sshd:auth): User unknown May 15 01:10:13.014004 sshd[1569]: Failed password for invalid user gabriella from 14.29.198.130 port 47468 ssh2 May 15 01:10:13.686035 systemd[1]: Started sshd@1-139.178.70.104:22-147.75.109.163:33300.service. 
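The kubelet failures above (three attempts so far, about ten seconds apart) all trace back to /var/lib/kubelet/config.yaml not existing yet, which is expected on a node that has not been initialized or joined with kubeadm; kubeadm writes that file from a KubeletConfiguration object. A minimal, hypothetical sketch of its shape, not the configuration this host eventually received:

    # /var/lib/kubelet/config.yaml - minimal KubeletConfiguration (illustrative values only)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs            # matches SystemdCgroup=false in the containerd config above
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10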
May 15 01:10:13.724211 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 33300 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:10:13.725011 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:10:13.727757 systemd-logind[1361]: New session 3 of user core. May 15 01:10:13.728043 systemd[1]: Started session-3.scope. May 15 01:10:13.775105 systemd[1]: Started sshd@2-139.178.70.104:22-147.75.109.163:33310.service. May 15 01:10:13.815015 sshd[1576]: Accepted publickey for core from 147.75.109.163 port 33310 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:10:13.815761 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:10:13.817913 systemd-logind[1361]: New session 4 of user core. May 15 01:10:13.818440 systemd[1]: Started session-4.scope. May 15 01:10:13.869801 systemd[1]: Started sshd@3-139.178.70.104:22-147.75.109.163:33318.service. May 15 01:10:13.870640 sshd[1576]: pam_unix(sshd:session): session closed for user core May 15 01:10:13.872018 systemd[1]: sshd@2-139.178.70.104:22-147.75.109.163:33310.service: Deactivated successfully. May 15 01:10:13.872814 systemd[1]: session-4.scope: Deactivated successfully. May 15 01:10:13.873096 systemd-logind[1361]: Session 4 logged out. Waiting for processes to exit. May 15 01:10:13.873788 systemd-logind[1361]: Removed session 4. May 15 01:10:13.907711 sshd[1581]: Accepted publickey for core from 147.75.109.163 port 33318 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:10:13.908555 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:10:13.911529 systemd[1]: Started session-5.scope. May 15 01:10:13.911799 systemd-logind[1361]: New session 5 of user core. May 15 01:10:13.960417 sshd[1581]: pam_unix(sshd:session): session closed for user core May 15 01:10:13.962486 systemd[1]: Started sshd@4-139.178.70.104:22-147.75.109.163:33326.service. May 15 01:10:13.965462 systemd[1]: sshd@3-139.178.70.104:22-147.75.109.163:33318.service: Deactivated successfully. May 15 01:10:13.965970 systemd[1]: session-5.scope: Deactivated successfully. May 15 01:10:13.967055 systemd-logind[1361]: Session 5 logged out. Waiting for processes to exit. May 15 01:10:13.967767 systemd-logind[1361]: Removed session 5. May 15 01:10:14.003114 sshd[1588]: Accepted publickey for core from 147.75.109.163 port 33326 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:10:14.004038 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:10:14.007370 systemd[1]: Started session-6.scope. May 15 01:10:14.007604 systemd-logind[1361]: New session 6 of user core. May 15 01:10:14.058711 sshd[1588]: pam_unix(sshd:session): session closed for user core May 15 01:10:14.060628 systemd[1]: Started sshd@5-139.178.70.104:22-147.75.109.163:33334.service. May 15 01:10:14.062224 systemd[1]: sshd@4-139.178.70.104:22-147.75.109.163:33326.service: Deactivated successfully. May 15 01:10:14.063019 systemd[1]: session-6.scope: Deactivated successfully. May 15 01:10:14.063290 systemd-logind[1361]: Session 6 logged out. Waiting for processes to exit. May 15 01:10:14.063797 systemd-logind[1361]: Removed session 6. 
May 15 01:10:14.100181 sshd[1595]: Accepted publickey for core from 147.75.109.163 port 33334 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:10:14.101511 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:10:14.105105 systemd[1]: Started session-7.scope. May 15 01:10:14.105367 systemd-logind[1361]: New session 7 of user core. May 15 01:10:14.191954 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 01:10:14.192458 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 01:10:14.200307 dbus-daemon[1350]: н\xca\xe79V: received setenforce notice (enforcing=325573984) May 15 01:10:14.200823 sudo[1601]: pam_unix(sudo:session): session closed for user root May 15 01:10:14.203377 sshd[1595]: pam_unix(sshd:session): session closed for user core May 15 01:10:14.204925 systemd[1]: Started sshd@6-139.178.70.104:22-147.75.109.163:33342.service. May 15 01:10:14.206813 systemd[1]: sshd@5-139.178.70.104:22-147.75.109.163:33334.service: Deactivated successfully. May 15 01:10:14.207801 systemd[1]: session-7.scope: Deactivated successfully. May 15 01:10:14.208190 systemd-logind[1361]: Session 7 logged out. Waiting for processes to exit. May 15 01:10:14.209226 systemd-logind[1361]: Removed session 7. May 15 01:10:14.245549 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 33342 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:10:14.246565 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:10:14.249217 systemd[1]: Started session-8.scope. May 15 01:10:14.249933 systemd-logind[1361]: New session 8 of user core. May 15 01:10:14.299618 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 01:10:14.299787 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 01:10:14.302150 sudo[1610]: pam_unix(sudo:session): session closed for user root May 15 01:10:14.305662 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 15 01:10:14.306018 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 01:10:14.313010 systemd[1]: Stopping audit-rules.service... May 15 01:10:14.314000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 15 01:10:14.315359 auditctl[1613]: No rules May 15 01:10:14.315675 systemd[1]: audit-rules.service: Deactivated successfully. May 15 01:10:14.315827 systemd[1]: Stopped audit-rules.service. May 15 01:10:14.317810 systemd[1]: Starting audit-rules.service... 
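The install.sh run above deletes the shipped 80-selinux.rules and 99-default.rules from /etc/audit/rules.d/ and restarts audit-rules.service, after which augenrules reports "No rules". For reference, a drop-in in that directory uses plain auditctl syntax; this example is hypothetical and is not restored on this host:

    # /etc/audit/rules.d/90-local.rules  (hypothetical example)
    # watch identity files for writes and attribute changes
    -w /etc/passwd -p wa -k identity
    -w /etc/shadow -p wa -k identity
    # record executions of setenforce, as used earlier in this session
    -w /usr/sbin/setenforce -p x -k selinux-toggle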
May 15 01:10:14.318222 kernel: kauditd_printk_skb: 177 callbacks suppressed May 15 01:10:14.318276 kernel: audit: type=1305 audit(1747271414.314:154): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 15 01:10:14.314000 audit[1613]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe6efba640 a2=420 a3=0 items=0 ppid=1 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.324816 kernel: audit: type=1300 audit(1747271414.314:154): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe6efba640 a2=420 a3=0 items=0 ppid=1 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.314000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 15 01:10:14.326322 kernel: audit: type=1327 audit(1747271414.314:154): proctitle=2F7362696E2F617564697463746C002D44 May 15 01:10:14.326356 kernel: audit: type=1131 audit(1747271414.315:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:14.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:14.336114 augenrules[1631]: No rules May 15 01:10:14.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:14.336000 audit[1609]: USER_END pid=1609 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:14.336939 sudo[1609]: pam_unix(sudo:session): session closed for user root May 15 01:10:14.336532 systemd[1]: Finished audit-rules.service. May 15 01:10:14.339557 systemd[1]: Started sshd@7-139.178.70.104:22-147.75.109.163:33346.service. May 15 01:10:14.342690 kernel: audit: type=1130 audit(1747271414.336:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:14.342721 kernel: audit: type=1106 audit(1747271414.336:157): pid=1609 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:14.344370 sshd[1603]: pam_unix(sshd:session): session closed for user core May 15 01:10:14.346428 systemd[1]: sshd@6-139.178.70.104:22-147.75.109.163:33342.service: Deactivated successfully. May 15 01:10:14.346872 systemd[1]: session-8.scope: Deactivated successfully. May 15 01:10:14.347632 systemd-logind[1361]: Session 8 logged out. Waiting for processes to exit. May 15 01:10:14.348737 systemd-logind[1361]: Removed session 8. 
May 15 01:10:14.336000 audit[1609]: CRED_DISP pid=1609 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:14.353259 kernel: audit: type=1104 audit(1747271414.336:158): pid=1609 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:14.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.104:22-147.75.109.163:33346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:14.345000 audit[1603]: USER_END pid=1603 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:14.361154 kernel: audit: type=1130 audit(1747271414.339:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.104:22-147.75.109.163:33346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:14.361199 kernel: audit: type=1106 audit(1747271414.345:160): pid=1603 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:14.345000 audit[1603]: CRED_DISP pid=1603 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:14.365249 kernel: audit: type=1104 audit(1747271414.345:161): pid=1603 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:14.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.104:22-147.75.109.163:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:10:14.385000 audit[1636]: USER_ACCT pid=1636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:14.385572 sshd[1636]: Accepted publickey for core from 147.75.109.163 port 33346 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:10:14.386000 audit[1636]: CRED_ACQ pid=1636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:14.386000 audit[1636]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff90f600d0 a2=3 a3=0 items=0 ppid=1 pid=1636 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.386000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:10:14.386503 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:10:14.389067 systemd[1]: Started session-9.scope. May 15 01:10:14.389252 systemd-logind[1361]: New session 9 of user core. May 15 01:10:14.391000 audit[1636]: USER_START pid=1636 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:14.392000 audit[1641]: CRED_ACQ pid=1641 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:14.406737 sshd[1569]: Received disconnect from 14.29.198.130 port 47468:11: Bye Bye [preauth] May 15 01:10:14.406737 sshd[1569]: Disconnected from invalid user gabriella 14.29.198.130 port 47468 [preauth] May 15 01:10:14.407633 systemd[1]: sshd@0-139.178.70.104:22-14.29.198.130:47468.service: Deactivated successfully. May 15 01:10:14.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@0-139.178.70.104:22-14.29.198.130:47468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:14.438000 audit[1644]: USER_ACCT pid=1644 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:14.438405 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 01:10:14.438000 audit[1644]: CRED_REFR pid=1644 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:14.438581 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 01:10:14.439000 audit[1644]: USER_START pid=1644 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' May 15 01:10:14.466145 systemd[1]: Starting docker.service... May 15 01:10:14.495306 env[1654]: time="2025-05-15T01:10:14.495270110Z" level=info msg="Starting up" May 15 01:10:14.497267 env[1654]: time="2025-05-15T01:10:14.496354347Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 01:10:14.497267 env[1654]: time="2025-05-15T01:10:14.496369288Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 01:10:14.497267 env[1654]: time="2025-05-15T01:10:14.496386548Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 01:10:14.497267 env[1654]: time="2025-05-15T01:10:14.496396057Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 01:10:14.498298 env[1654]: time="2025-05-15T01:10:14.498287554Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 01:10:14.498356 env[1654]: time="2025-05-15T01:10:14.498346897Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 01:10:14.498403 env[1654]: time="2025-05-15T01:10:14.498393368Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 01:10:14.498446 env[1654]: time="2025-05-15T01:10:14.498437565Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 01:10:14.502082 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4107566984-merged.mount: Deactivated successfully. May 15 01:10:14.751538 env[1654]: time="2025-05-15T01:10:14.751481637Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 15 01:10:14.751538 env[1654]: time="2025-05-15T01:10:14.751501881Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 15 01:10:14.751941 env[1654]: time="2025-05-15T01:10:14.751926723Z" level=info msg="Loading containers: start." 
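dockerd is now loading containers, and the NETFILTER_CFG audit records that follow are it creating its chains; each record's PROCTITLE field hex-encodes the exact iptables invocation. Decoded with the same NUL-separated scheme as the auditctl record earlier, the first few read:

    /usr/sbin/iptables --wait -t nat -N DOCKER
    /usr/sbin/iptables --wait -t filter -N DOCKER
    /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
    /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2
    /usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN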
May 15 01:10:14.793000 audit[1684]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.793000 audit[1684]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc06cb6a40 a2=0 a3=7ffc06cb6a2c items=0 ppid=1654 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.793000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 15 01:10:14.795000 audit[1686]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1686 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.795000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffedd035930 a2=0 a3=7ffedd03591c items=0 ppid=1654 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.795000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 15 01:10:14.796000 audit[1688]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.796000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde21ee500 a2=0 a3=7ffde21ee4ec items=0 ppid=1654 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.796000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 15 01:10:14.797000 audit[1690]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.797000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe826cd3a0 a2=0 a3=7ffe826cd38c items=0 ppid=1654 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.797000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 15 01:10:14.798000 audit[1692]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1692 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.798000 audit[1692]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd2bdce3f0 a2=0 a3=7ffd2bdce3dc items=0 ppid=1654 pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.798000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 15 01:10:14.819000 audit[1697]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1697 subj=system_u:system_r:kernel_t:s0 
comm="iptables" May 15 01:10:14.819000 audit[1697]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffcc41f270 a2=0 a3=7fffcc41f25c items=0 ppid=1654 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.819000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 15 01:10:14.841000 audit[1699]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.841000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff3ed4f8e0 a2=0 a3=7fff3ed4f8cc items=0 ppid=1654 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.841000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 15 01:10:14.842000 audit[1701]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1701 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.842000 audit[1701]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff3d49a090 a2=0 a3=7fff3d49a07c items=0 ppid=1654 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.842000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 15 01:10:14.843000 audit[1703]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1703 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.843000 audit[1703]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fffa2607d20 a2=0 a3=7fffa2607d0c items=0 ppid=1654 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.843000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 15 01:10:14.861000 audit[1707]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1707 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.861000 audit[1707]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcd518f3f0 a2=0 a3=7ffcd518f3dc items=0 ppid=1654 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.861000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 15 01:10:14.866000 audit[1708]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.866000 audit[1708]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff5019c900 a2=0 a3=7fff5019c8ec items=0 ppid=1654 
pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.866000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 15 01:10:14.875254 kernel: Initializing XFRM netlink socket May 15 01:10:14.901439 env[1654]: time="2025-05-15T01:10:14.901414728Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 15 01:10:14.916000 audit[1716]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1716 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.916000 audit[1716]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffbfa054a0 a2=0 a3=7fffbfa0548c items=0 ppid=1654 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.916000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 15 01:10:14.931000 audit[1719]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1719 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.931000 audit[1719]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcdbc3d840 a2=0 a3=7ffcdbc3d82c items=0 ppid=1654 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.931000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 15 01:10:14.933000 audit[1722]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1722 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.933000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe644afff0 a2=0 a3=7ffe644affdc items=0 ppid=1654 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.933000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 15 01:10:14.934000 audit[1724]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1724 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.934000 audit[1724]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe32b1d860 a2=0 a3=7ffe32b1d84c items=0 ppid=1654 pid=1724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.934000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 15 01:10:14.935000 audit[1726]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1726 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.935000 audit[1726]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe1d62f680 a2=0 a3=7ffe1d62f66c items=0 ppid=1654 pid=1726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.935000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 15 01:10:14.937000 audit[1728]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1728 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.937000 audit[1728]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff5625fc80 a2=0 a3=7fff5625fc6c items=0 ppid=1654 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.937000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 15 01:10:14.938000 audit[1730]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1730 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.938000 audit[1730]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffdcf215860 a2=0 a3=7ffdcf21584c items=0 ppid=1654 pid=1730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.938000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 15 01:10:14.974000 audit[1733]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1733 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.974000 audit[1733]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffea6c62f60 a2=0 a3=7ffea6c62f4c items=0 ppid=1654 pid=1733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.974000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 15 01:10:14.975000 audit[1735]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1735 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.975000 audit[1735]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc2da70780 a2=0 a3=7ffc2da7076c items=0 ppid=1654 pid=1735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.975000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 15 01:10:14.977000 audit[1737]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1737 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.977000 audit[1737]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd6bfccd40 a2=0 a3=7ffd6bfccd2c items=0 ppid=1654 pid=1737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.977000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 15 01:10:14.978000 audit[1739]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1739 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:14.978000 audit[1739]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdd0544990 a2=0 a3=7ffdd054497c items=0 ppid=1654 pid=1739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:14.978000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 15 01:10:14.978769 systemd-networkd[1141]: docker0: Link UP May 15 01:10:15.020000 audit[1743]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1743 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:15.020000 audit[1743]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc31967bf0 a2=0 a3=7ffc31967bdc items=0 ppid=1654 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:15.020000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 15 01:10:15.025000 audit[1744]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1744 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:15.025000 audit[1744]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe41526d50 a2=0 a3=7ffe41526d3c items=0 ppid=1654 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:15.025000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 15 01:10:15.026603 env[1654]: time="2025-05-15T01:10:15.026578885Z" level=info msg="Loading containers: done." May 15 01:10:15.037948 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1890697368-merged.mount: Deactivated successfully. 
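The NETFILTER_CFG audit records above carry the command line that created each Docker chain or rule only as a hex-encoded PROCTITLE field (argv hex-encoded with NUL separators). A minimal Python sketch for reading one back, not part of the journal itself; the helper name decode_proctitle is illustrative:

    def decode_proctitle(hex_blob: str) -> str:
        # auditd hex-encodes the process title because its argv entries are NUL-separated
        raw = bytes.fromhex(hex_blob)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    decode_proctitle("2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552")
    # -> '/usr/sbin/iptables --wait -t nat -N DOCKER', the first chain Docker registers above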
May 15 01:10:15.055972 env[1654]: time="2025-05-15T01:10:15.055948736Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 01:10:15.056255 env[1654]: time="2025-05-15T01:10:15.056218780Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 15 01:10:15.056402 env[1654]: time="2025-05-15T01:10:15.056378415Z" level=info msg="Daemon has completed initialization" May 15 01:10:15.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:15.075347 systemd[1]: Started docker.service. May 15 01:10:15.075792 env[1654]: time="2025-05-15T01:10:15.075716014Z" level=info msg="API listen on /run/docker.sock" May 15 01:10:16.291865 env[1378]: time="2025-05-15T01:10:16.291691511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 01:10:16.819797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001352042.mount: Deactivated successfully. May 15 01:10:17.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:17.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:17.909563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 01:10:17.909692 systemd[1]: Stopped kubelet.service. May 15 01:10:17.910826 systemd[1]: Starting kubelet.service... May 15 01:10:17.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:17.963724 systemd[1]: Started kubelet.service. May 15 01:10:18.006361 kubelet[1790]: E0515 01:10:18.006338 1790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:10:18.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 01:10:18.008063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:10:18.008149 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
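A few entries up the daemon reports "API listen on /run/docker.sock"; the Engine API is plain HTTP spoken over that Unix socket, so it can be queried with nothing but the standard library. A minimal sketch, assuming the socket is readable by the caller; UnixHTTPConnection is an illustrative helper, not part of Docker:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over a Unix domain socket; this daemon exposes no TCP listener.
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(json.loads(conn.getresponse().read()))
    # expect a Version field of 20.10.23, matching the "Docker daemon" line above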
May 15 01:10:18.193208 env[1378]: time="2025-05-15T01:10:18.193139142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:18.207251 env[1378]: time="2025-05-15T01:10:18.207089776Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:18.214332 env[1378]: time="2025-05-15T01:10:18.214320222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:18.222584 env[1378]: time="2025-05-15T01:10:18.222564741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:18.222756 env[1378]: time="2025-05-15T01:10:18.222738109Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 15 01:10:18.228123 env[1378]: time="2025-05-15T01:10:18.228094665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 01:10:19.744957 env[1378]: time="2025-05-15T01:10:19.744925860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:19.745624 env[1378]: time="2025-05-15T01:10:19.745608469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:19.746549 env[1378]: time="2025-05-15T01:10:19.746535334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:19.747541 env[1378]: time="2025-05-15T01:10:19.747527612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:19.748393 env[1378]: time="2025-05-15T01:10:19.748370466Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 15 01:10:19.755028 env[1378]: time="2025-05-15T01:10:19.755008716Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 01:10:21.024749 env[1378]: time="2025-05-15T01:10:21.024714581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:21.032879 env[1378]: time="2025-05-15T01:10:21.032863230Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:21.036954 env[1378]: 
time="2025-05-15T01:10:21.036938366Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:21.043143 env[1378]: time="2025-05-15T01:10:21.043129602Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:21.043564 env[1378]: time="2025-05-15T01:10:21.043543550Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 15 01:10:21.049047 env[1378]: time="2025-05-15T01:10:21.049024219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 01:10:22.025646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562006928.mount: Deactivated successfully. May 15 01:10:22.605292 env[1378]: time="2025-05-15T01:10:22.605260364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:22.633634 env[1378]: time="2025-05-15T01:10:22.633611379Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:22.643510 env[1378]: time="2025-05-15T01:10:22.643484388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:22.653459 env[1378]: time="2025-05-15T01:10:22.653436541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:22.653817 env[1378]: time="2025-05-15T01:10:22.653799119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 15 01:10:22.661450 env[1378]: time="2025-05-15T01:10:22.661423084Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 01:10:23.154407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354169498.mount: Deactivated successfully. 
May 15 01:10:23.894819 env[1378]: time="2025-05-15T01:10:23.894789694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:23.895698 env[1378]: time="2025-05-15T01:10:23.895682712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:23.896635 env[1378]: time="2025-05-15T01:10:23.896623884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:23.897581 env[1378]: time="2025-05-15T01:10:23.897567126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:23.898099 env[1378]: time="2025-05-15T01:10:23.898084337Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 01:10:23.904351 env[1378]: time="2025-05-15T01:10:23.904337027Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 01:10:24.389510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852250119.mount: Deactivated successfully. May 15 01:10:24.391551 env[1378]: time="2025-05-15T01:10:24.391532314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:24.392499 env[1378]: time="2025-05-15T01:10:24.392488522Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:24.393478 env[1378]: time="2025-05-15T01:10:24.393467517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:24.394348 env[1378]: time="2025-05-15T01:10:24.394335986Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:24.394706 env[1378]: time="2025-05-15T01:10:24.394691124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 01:10:24.400471 env[1378]: time="2025-05-15T01:10:24.400456094Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 01:10:24.842015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924740869.mount: Deactivated successfully. 
May 15 01:10:26.995404 env[1378]: time="2025-05-15T01:10:26.995365744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:26.997606 env[1378]: time="2025-05-15T01:10:26.997574102Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:26.999119 env[1378]: time="2025-05-15T01:10:26.999101689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:27.000739 env[1378]: time="2025-05-15T01:10:27.000714024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:27.001016 env[1378]: time="2025-05-15T01:10:27.000992971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 01:10:28.159620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 01:10:28.159742 systemd[1]: Stopped kubelet.service. May 15 01:10:28.161055 systemd[1]: Starting kubelet.service... May 15 01:10:28.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:28.162383 kernel: kauditd_printk_skb: 89 callbacks suppressed May 15 01:10:28.162422 kernel: audit: type=1130 audit(1747271428.158:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:28.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:28.170244 kernel: audit: type=1131 audit(1747271428.158:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:28.638287 update_engine[1363]: I0515 01:10:28.638221 1363 update_attempter.cc:509] Updating boot flags... May 15 01:10:28.752516 systemd[1]: Started kubelet.service. May 15 01:10:28.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:28.756248 kernel: audit: type=1130 audit(1747271428.751:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:10:28.803607 kubelet[1913]: E0515 01:10:28.803580 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 01:10:28.804445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 01:10:28.804530 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 01:10:28.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 01:10:28.808246 kernel: audit: type=1131 audit(1747271428.803:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 01:10:29.205512 systemd[1]: Stopped kubelet.service. May 15 01:10:29.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:29.206942 systemd[1]: Starting kubelet.service... May 15 01:10:29.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:29.212457 kernel: audit: type=1130 audit(1747271429.204:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:29.212502 kernel: audit: type=1131 audit(1747271429.204:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:29.226967 systemd[1]: Reloading. May 15 01:10:29.279113 /usr/lib/systemd/system-generators/torcx-generator[1947]: time="2025-05-15T01:10:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 01:10:29.279346 /usr/lib/systemd/system-generators/torcx-generator[1947]: time="2025-05-15T01:10:29Z" level=info msg="torcx already run" May 15 01:10:29.344603 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 01:10:29.344717 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 01:10:29.356700 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:10:29.412593 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 01:10:29.412749 systemd[1]: kubelet.service: Failed with result 'signal'. 
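The kernel audit lines above key each event as audit(EPOCH.millis:SERIAL), e.g. audit(1747271428.158:201). A minimal sketch for converting that key back to wall-clock time; audit_time is an illustrative name and UTC is assumed:

    import re
    from datetime import datetime, timezone

    def audit_time(record: str) -> datetime:
        # Extract the epoch seconds from an "audit(1747271428.158:201)"-style key.
        epoch = float(re.search(r'audit\((\d+\.\d+):\d+\)', record).group(1))
        return datetime.fromtimestamp(epoch, tz=timezone.utc)

    audit_time('audit(1747271428.158:201)')
    # -> 2025-05-15 01:10:28.158000+00:00, consistent with the surrounding journal timestamps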
May 15 01:10:29.413069 systemd[1]: Stopped kubelet.service. May 15 01:10:29.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 01:10:29.416249 kernel: audit: type=1130 audit(1747271429.411:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 15 01:10:29.417643 systemd[1]: Starting kubelet.service... May 15 01:10:29.822444 kernel: audit: type=1130 audit(1747271429.818:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:29.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:29.819109 systemd[1]: Started kubelet.service. May 15 01:10:29.898692 kubelet[2022]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:10:29.898928 kubelet[2022]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 01:10:29.898968 kubelet[2022]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:10:29.899046 kubelet[2022]: I0515 01:10:29.899031 2022 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 01:10:30.027950 kubelet[2022]: I0515 01:10:30.027929 2022 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 01:10:30.028050 kubelet[2022]: I0515 01:10:30.028042 2022 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 01:10:30.028225 kubelet[2022]: I0515 01:10:30.028217 2022 server.go:927] "Client rotation is on, will bootstrap in background" May 15 01:10:30.154908 kubelet[2022]: I0515 01:10:30.154496 2022 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 01:10:30.159337 kubelet[2022]: E0515 01:10:30.159322 2022 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.185924 kubelet[2022]: I0515 01:10:30.185906 2022 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 01:10:30.214922 kubelet[2022]: I0515 01:10:30.214887 2022 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 01:10:30.215179 kubelet[2022]: I0515 01:10:30.214921 2022 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 01:10:30.215276 kubelet[2022]: I0515 01:10:30.215185 2022 topology_manager.go:138] "Creating topology manager with none policy" May 15 01:10:30.215276 kubelet[2022]: I0515 01:10:30.215193 2022 container_manager_linux.go:301] "Creating device plugin manager" May 15 01:10:30.215321 kubelet[2022]: I0515 01:10:30.215281 2022 state_mem.go:36] "Initialized new in-memory state store" May 15 01:10:30.222726 kubelet[2022]: I0515 01:10:30.222711 2022 kubelet.go:400] "Attempting to sync node with API server" May 15 01:10:30.222726 kubelet[2022]: I0515 01:10:30.222728 2022 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 01:10:30.222806 kubelet[2022]: I0515 01:10:30.222743 2022 kubelet.go:312] "Adding apiserver pod source" May 15 01:10:30.222806 kubelet[2022]: I0515 01:10:30.222752 2022 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 01:10:30.246344 kubelet[2022]: W0515 01:10:30.246294 2022 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.246433 kubelet[2022]: E0515 01:10:30.246355 2022 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.268949 kubelet[2022]: W0515 01:10:30.268915 2022 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.269007 kubelet[2022]: E0515 01:10:30.268951 2022 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.269345 kubelet[2022]: I0515 01:10:30.269329 2022 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 01:10:30.322659 kubelet[2022]: I0515 01:10:30.322631 2022 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 01:10:30.322761 kubelet[2022]: W0515 01:10:30.322685 2022 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 01:10:30.323176 kubelet[2022]: I0515 01:10:30.323160 2022 server.go:1264] "Started kubelet" May 15 01:10:30.441399 kubelet[2022]: I0515 01:10:30.441370 2022 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 01:10:30.442306 kubelet[2022]: I0515 01:10:30.442294 2022 server.go:455] "Adding debug handlers to kubelet server" May 15 01:10:30.443036 kubelet[2022]: I0515 01:10:30.442991 2022 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 01:10:30.443207 kubelet[2022]: I0515 01:10:30.443192 2022 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 01:10:30.449191 kernel: audit: type=1400 audit(1747271430.442:209): avc: denied { mac_admin } for pid=2022 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:30.449267 kernel: audit: type=1401 audit(1747271430.442:209): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:30.442000 audit[2022]: AVC avc: denied { mac_admin } for pid=2022 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:30.442000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:30.442000 audit[2022]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009cec30 a1=c00077afa8 a2=c0009ceb70 a3=25 items=0 ppid=1 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 01:10:30.443000 audit[2022]: AVC avc: denied { mac_admin } for pid=2022 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:30.443000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:30.443000 audit[2022]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008de4c0 a1=c00077afc0 a2=c0009cecc0 
a3=25 items=0 ppid=1 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.443000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 01:10:30.449535 kubelet[2022]: E0515 01:10:30.444505 2022 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8e1bc29973ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 01:10:30.323139502 +0000 UTC m=+0.497963513,LastTimestamp:2025-05-15 01:10:30.323139502 +0000 UTC m=+0.497963513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 01:10:30.449535 kubelet[2022]: I0515 01:10:30.445273 2022 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 15 01:10:30.449535 kubelet[2022]: I0515 01:10:30.445305 2022 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 15 01:10:30.449535 kubelet[2022]: I0515 01:10:30.445367 2022 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 01:10:30.449535 kubelet[2022]: I0515 01:10:30.447670 2022 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 01:10:30.449535 kubelet[2022]: I0515 01:10:30.447742 2022 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 01:10:30.449535 kubelet[2022]: I0515 01:10:30.447776 2022 reconciler.go:26] "Reconciler: start to sync state" May 15 01:10:30.449535 kubelet[2022]: E0515 01:10:30.448211 2022 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms" May 15 01:10:30.449709 kubelet[2022]: W0515 01:10:30.448267 2022 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.449709 kubelet[2022]: E0515 01:10:30.448295 2022 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.449000 audit[2033]: NETFILTER_CFG table=mangle:26 family=2 entries=2 
op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:30.449000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcc360c950 a2=0 a3=7ffcc360c93c items=0 ppid=2022 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 15 01:10:30.450868 kubelet[2022]: I0515 01:10:30.450783 2022 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 01:10:30.450000 audit[2034]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:30.450000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7b881da0 a2=0 a3=7fff7b881d8c items=0 ppid=2022 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 15 01:10:30.453249 kubelet[2022]: E0515 01:10:30.452448 2022 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 01:10:30.453249 kubelet[2022]: I0515 01:10:30.452595 2022 factory.go:221] Registration of the containerd container factory successfully May 15 01:10:30.453249 kubelet[2022]: I0515 01:10:30.452600 2022 factory.go:221] Registration of the systemd container factory successfully May 15 01:10:30.454000 audit[2036]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:30.454000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd35a3a3e0 a2=0 a3=7ffd35a3a3cc items=0 ppid=2022 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 01:10:30.456000 audit[2038]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:30.456000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff59f04da0 a2=0 a3=7fff59f04d8c items=0 ppid=2022 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 01:10:30.465000 audit[2041]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 
comm="iptables" May 15 01:10:30.465000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd3d147890 a2=0 a3=7ffd3d14787c items=0 ppid=2022 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 15 01:10:30.467470 kubelet[2022]: I0515 01:10:30.467448 2022 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 01:10:30.466000 audit[2042]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:30.466000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff23717650 a2=0 a3=7fff2371763c items=0 ppid=2022 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 15 01:10:30.468814 kubelet[2022]: I0515 01:10:30.468805 2022 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 01:10:30.468869 kubelet[2022]: I0515 01:10:30.468862 2022 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 01:10:30.468932 kubelet[2022]: I0515 01:10:30.468925 2022 kubelet.go:2337] "Starting kubelet main sync loop" May 15 01:10:30.469003 kubelet[2022]: E0515 01:10:30.468993 2022 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 01:10:30.468000 audit[2043]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:30.468000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff76f4b200 a2=0 a3=7fff76f4b1ec items=0 ppid=2022 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 15 01:10:30.469000 audit[2044]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:30.469000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc84615a90 a2=0 a3=7ffc84615a7c items=0 ppid=2022 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 15 01:10:30.469000 audit[2045]: NETFILTER_CFG table=filter:34 family=2 
entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:30.469000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb5ec4770 a2=0 a3=7ffdb5ec475c items=0 ppid=2022 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 15 01:10:30.470000 audit[2046]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:30.470000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd8276620 a2=0 a3=7fffd827660c items=0 ppid=2022 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 15 01:10:30.473552 kubelet[2022]: W0515 01:10:30.473529 2022 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.473619 kubelet[2022]: E0515 01:10:30.473610 2022 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:30.472000 audit[2049]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:30.472000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffffe9edd20 a2=0 a3=7ffffe9edd0c items=0 ppid=2022 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 15 01:10:30.473000 audit[2050]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:30.473000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc90948f00 a2=0 a3=7ffc90948eec items=0 ppid=2022 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 15 01:10:30.476055 kubelet[2022]: I0515 01:10:30.476046 2022 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 01:10:30.476111 kubelet[2022]: I0515 01:10:30.476103 2022 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" May 15 01:10:30.476159 kubelet[2022]: I0515 01:10:30.476153 2022 state_mem.go:36] "Initialized new in-memory state store" May 15 01:10:30.477058 kubelet[2022]: I0515 01:10:30.477051 2022 policy_none.go:49] "None policy: Start" May 15 01:10:30.477344 kubelet[2022]: I0515 01:10:30.477337 2022 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 01:10:30.477396 kubelet[2022]: I0515 01:10:30.477390 2022 state_mem.go:35] "Initializing new in-memory state store" May 15 01:10:30.480172 kubelet[2022]: I0515 01:10:30.480162 2022 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 01:10:30.478000 audit[2022]: AVC avc: denied { mac_admin } for pid=2022 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:30.478000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:30.478000 audit[2022]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c84d50 a1=c0004fa558 a2=c000c84d20 a3=25 items=0 ppid=1 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:30.478000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 01:10:30.480433 kubelet[2022]: I0515 01:10:30.480423 2022 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 15 01:10:30.480545 kubelet[2022]: I0515 01:10:30.480526 2022 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 01:10:30.480643 kubelet[2022]: I0515 01:10:30.480636 2022 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 01:10:30.483217 kubelet[2022]: E0515 01:10:30.483207 2022 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 01:10:30.549405 kubelet[2022]: I0515 01:10:30.549382 2022 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 01:10:30.549608 kubelet[2022]: E0515 01:10:30.549596 2022 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" May 15 01:10:30.569989 kubelet[2022]: I0515 01:10:30.569966 2022 topology_manager.go:215] "Topology Admit Handler" podUID="0ea44c54d4d7506222101f069816bc4b" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 01:10:30.570842 kubelet[2022]: I0515 01:10:30.570826 2022 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 01:10:30.571444 kubelet[2022]: I0515 01:10:30.571430 2022 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 01:10:30.648553 kubelet[2022]: E0515 01:10:30.648532 2022 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms" May 15 01:10:30.648653 kubelet[2022]: I0515 01:10:30.648588 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ea44c54d4d7506222101f069816bc4b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea44c54d4d7506222101f069816bc4b\") " pod="kube-system/kube-apiserver-localhost" May 15 01:10:30.648653 kubelet[2022]: I0515 01:10:30.648601 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ea44c54d4d7506222101f069816bc4b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea44c54d4d7506222101f069816bc4b\") " pod="kube-system/kube-apiserver-localhost" May 15 01:10:30.648653 kubelet[2022]: I0515 01:10:30.648611 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:30.648653 kubelet[2022]: I0515 01:10:30.648620 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 15 01:10:30.648653 kubelet[2022]: I0515 01:10:30.648629 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:30.648748 kubelet[2022]: I0515 01:10:30.648638 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ea44c54d4d7506222101f069816bc4b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ea44c54d4d7506222101f069816bc4b\") " pod="kube-system/kube-apiserver-localhost" May 15 01:10:30.648748 kubelet[2022]: I0515 01:10:30.648646 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:30.648748 kubelet[2022]: I0515 01:10:30.648654 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:30.648748 kubelet[2022]: I0515 01:10:30.648663 2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 01:10:30.752255 kubelet[2022]: I0515 01:10:30.751186 2022 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 01:10:30.752255 kubelet[2022]: E0515 01:10:30.751410 2022 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" May 15 01:10:30.875139 env[1378]: time="2025-05-15T01:10:30.875109384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ea44c54d4d7506222101f069816bc4b,Namespace:kube-system,Attempt:0,}" May 15 01:10:30.875632 env[1378]: time="2025-05-15T01:10:30.875569102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 15 01:10:30.876961 env[1378]: time="2025-05-15T01:10:30.876944781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 15 01:10:31.049166 kubelet[2022]: E0515 01:10:31.049086 2022 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms" May 15 01:10:31.152528 kubelet[2022]: 
I0515 01:10:31.152513 2022 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 01:10:31.152892 kubelet[2022]: E0515 01:10:31.152880 2022 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" May 15 01:10:31.277876 kubelet[2022]: W0515 01:10:31.277836 2022 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:31.278008 kubelet[2022]: E0515 01:10:31.277999 2022 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:31.412268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809825791.mount: Deactivated successfully. May 15 01:10:31.412875 env[1378]: time="2025-05-15T01:10:31.412760626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.414215 env[1378]: time="2025-05-15T01:10:31.414164104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.415392 env[1378]: time="2025-05-15T01:10:31.415376338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.416271 env[1378]: time="2025-05-15T01:10:31.416255560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.416674 env[1378]: time="2025-05-15T01:10:31.416660579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.417112 env[1378]: time="2025-05-15T01:10:31.417097592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.418221 env[1378]: time="2025-05-15T01:10:31.418206670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.420307 env[1378]: time="2025-05-15T01:10:31.420292788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.420779 env[1378]: time="2025-05-15T01:10:31.420763537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.424399 env[1378]: time="2025-05-15T01:10:31.424385291Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.428895 env[1378]: time="2025-05-15T01:10:31.427657396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.428895 env[1378]: time="2025-05-15T01:10:31.428137116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:31.444717 env[1378]: time="2025-05-15T01:10:31.444458517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:10:31.444717 env[1378]: time="2025-05-15T01:10:31.444494335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:10:31.444717 env[1378]: time="2025-05-15T01:10:31.444501845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:10:31.444717 env[1378]: time="2025-05-15T01:10:31.444571832Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69fde90bbb756d797ec69cbf952e5a8d2ca8d5d1652c63eeb58d6db9b433560f pid=2075 runtime=io.containerd.runc.v2 May 15 01:10:31.445677 env[1378]: time="2025-05-15T01:10:31.445646232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:10:31.445766 env[1378]: time="2025-05-15T01:10:31.445667213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:10:31.445766 env[1378]: time="2025-05-15T01:10:31.445674061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:10:31.445918 env[1378]: time="2025-05-15T01:10:31.445886323Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f668913f2083f85f5bb33a6b11024f76f224a6d1c6819dc0fa56fd7f1449b93 pid=2059 runtime=io.containerd.runc.v2 May 15 01:10:31.478269 env[1378]: time="2025-05-15T01:10:31.476774465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:10:31.478269 env[1378]: time="2025-05-15T01:10:31.476798325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:10:31.478269 env[1378]: time="2025-05-15T01:10:31.476805163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:10:31.478269 env[1378]: time="2025-05-15T01:10:31.476886068Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/699302c7171f0f32eaeff5fe0a593a90c88a540f6d633ff131d02f6166c303e0 pid=2114 runtime=io.containerd.runc.v2 May 15 01:10:31.515816 env[1378]: time="2025-05-15T01:10:31.515704716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f668913f2083f85f5bb33a6b11024f76f224a6d1c6819dc0fa56fd7f1449b93\"" May 15 01:10:31.519186 kubelet[2022]: W0515 01:10:31.519143 2022 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:31.519257 kubelet[2022]: E0515 01:10:31.519193 2022 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:31.521079 env[1378]: time="2025-05-15T01:10:31.521058044Z" level=info msg="CreateContainer within sandbox \"6f668913f2083f85f5bb33a6b11024f76f224a6d1c6819dc0fa56fd7f1449b93\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 01:10:31.534261 env[1378]: time="2025-05-15T01:10:31.534221128Z" level=info msg="CreateContainer within sandbox \"6f668913f2083f85f5bb33a6b11024f76f224a6d1c6819dc0fa56fd7f1449b93\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d420d492fcb2be50e826415376fec8c45d6cba331d0027ddcb3d3e90923b0f6f\"" May 15 01:10:31.534852 env[1378]: time="2025-05-15T01:10:31.534839941Z" level=info msg="StartContainer for \"d420d492fcb2be50e826415376fec8c45d6cba331d0027ddcb3d3e90923b0f6f\"" May 15 01:10:31.534964 kubelet[2022]: W0515 01:10:31.534924 2022 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:31.535010 kubelet[2022]: E0515 01:10:31.534970 2022 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:31.541765 env[1378]: time="2025-05-15T01:10:31.541731333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ea44c54d4d7506222101f069816bc4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"69fde90bbb756d797ec69cbf952e5a8d2ca8d5d1652c63eeb58d6db9b433560f\"" May 15 01:10:31.543194 env[1378]: time="2025-05-15T01:10:31.543175635Z" level=info msg="CreateContainer within sandbox \"69fde90bbb756d797ec69cbf952e5a8d2ca8d5d1652c63eeb58d6db9b433560f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 01:10:31.547719 env[1378]: time="2025-05-15T01:10:31.547692532Z" level=info msg="CreateContainer within sandbox \"69fde90bbb756d797ec69cbf952e5a8d2ca8d5d1652c63eeb58d6db9b433560f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"23b5f0a5396ca0ac832cace0e21dbe3c2125515989888de78bb167db6ab5629a\"" May 15 01:10:31.547936 env[1378]: time="2025-05-15T01:10:31.547923179Z" level=info msg="StartContainer for \"23b5f0a5396ca0ac832cace0e21dbe3c2125515989888de78bb167db6ab5629a\"" May 15 01:10:31.554213 env[1378]: time="2025-05-15T01:10:31.554192290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"699302c7171f0f32eaeff5fe0a593a90c88a540f6d633ff131d02f6166c303e0\"" May 15 01:10:31.555556 env[1378]: time="2025-05-15T01:10:31.555542739Z" level=info msg="CreateContainer within sandbox \"699302c7171f0f32eaeff5fe0a593a90c88a540f6d633ff131d02f6166c303e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 01:10:31.589973 env[1378]: time="2025-05-15T01:10:31.589950226Z" level=info msg="CreateContainer within sandbox \"699302c7171f0f32eaeff5fe0a593a90c88a540f6d633ff131d02f6166c303e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"acbb675e7a84914da7a8151c21b4e555d418456862d07400208db461003faa8e\"" May 15 01:10:31.594836 env[1378]: time="2025-05-15T01:10:31.593418959Z" level=info msg="StartContainer for \"acbb675e7a84914da7a8151c21b4e555d418456862d07400208db461003faa8e\"" May 15 01:10:31.604370 kubelet[2022]: W0515 01:10:31.604303 2022 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:31.604370 kubelet[2022]: E0515 01:10:31.604373 2022 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:31.606642 env[1378]: time="2025-05-15T01:10:31.606619068Z" level=info msg="StartContainer for \"d420d492fcb2be50e826415376fec8c45d6cba331d0027ddcb3d3e90923b0f6f\" returns successfully" May 15 01:10:31.610503 env[1378]: time="2025-05-15T01:10:31.610480176Z" level=info msg="StartContainer for \"23b5f0a5396ca0ac832cace0e21dbe3c2125515989888de78bb167db6ab5629a\" returns successfully" May 15 01:10:31.656259 env[1378]: time="2025-05-15T01:10:31.654504797Z" level=info msg="StartContainer for \"acbb675e7a84914da7a8151c21b4e555d418456862d07400208db461003faa8e\" returns successfully" May 15 01:10:31.849486 kubelet[2022]: E0515 01:10:31.849411 2022 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s" May 15 01:10:31.954800 kubelet[2022]: I0515 01:10:31.954586 2022 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 01:10:31.954800 kubelet[2022]: E0515 01:10:31.954769 2022 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" May 15 01:10:32.161269 kubelet[2022]: E0515 01:10:32.161243 2022 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate 
signing request: Post "https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.104:6443: connect: connection refused May 15 01:10:33.453404 kubelet[2022]: E0515 01:10:33.453381 2022 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 01:10:33.556489 kubelet[2022]: I0515 01:10:33.556465 2022 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 01:10:33.565205 kubelet[2022]: I0515 01:10:33.565180 2022 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 01:10:33.569770 kubelet[2022]: E0515 01:10:33.569753 2022 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 01:10:34.247851 kubelet[2022]: I0515 01:10:34.247821 2022 apiserver.go:52] "Watching apiserver" May 15 01:10:34.348097 kubelet[2022]: I0515 01:10:34.348080 2022 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 01:10:35.303677 systemd[1]: Reloading. May 15 01:10:35.373918 /usr/lib/systemd/system-generators/torcx-generator[2304]: time="2025-05-15T01:10:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 01:10:35.373938 /usr/lib/systemd/system-generators/torcx-generator[2304]: time="2025-05-15T01:10:35Z" level=info msg="torcx already run" May 15 01:10:35.425869 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 01:10:35.425980 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 01:10:35.437737 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 01:10:35.502506 kubelet[2022]: E0515 01:10:35.502442 2022 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.183f8e1bc29973ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 01:10:30.323139502 +0000 UTC m=+0.497963513,LastTimestamp:2025-05-15 01:10:30.323139502 +0000 UTC m=+0.497963513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 01:10:35.504013 systemd[1]: Stopping kubelet.service... May 15 01:10:35.513489 systemd[1]: kubelet.service: Deactivated successfully. May 15 01:10:35.513717 systemd[1]: Stopped kubelet.service. May 15 01:10:35.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:10:35.514478 kernel: kauditd_printk_skb: 46 callbacks suppressed May 15 01:10:35.514524 kernel: audit: type=1131 audit(1747271435.512:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:35.515494 systemd[1]: Starting kubelet.service... May 15 01:10:36.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:36.505931 systemd[1]: Started kubelet.service. May 15 01:10:36.510342 kernel: audit: type=1130 audit(1747271436.504:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:36.660090 kubelet[2379]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:10:36.660444 kubelet[2379]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 01:10:36.660502 kubelet[2379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 01:10:36.660628 kubelet[2379]: I0515 01:10:36.660598 2379 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 01:10:36.666709 kubelet[2379]: I0515 01:10:36.666688 2379 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 01:10:36.666709 kubelet[2379]: I0515 01:10:36.666706 2379 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 01:10:36.667084 kubelet[2379]: I0515 01:10:36.667071 2379 server.go:927] "Client rotation is on, will bootstrap in background" May 15 01:10:36.669371 kubelet[2379]: I0515 01:10:36.669356 2379 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 01:10:36.670926 kubelet[2379]: I0515 01:10:36.670906 2379 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 01:10:36.677608 kubelet[2379]: I0515 01:10:36.677595 2379 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 01:10:36.687216 kubelet[2379]: I0515 01:10:36.687185 2379 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 01:10:36.687437 kubelet[2379]: I0515 01:10:36.687315 2379 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 01:10:36.687545 kubelet[2379]: I0515 01:10:36.687536 2379 topology_manager.go:138] "Creating topology manager with none policy" May 15 01:10:36.687592 kubelet[2379]: I0515 01:10:36.687585 2379 container_manager_linux.go:301] "Creating device plugin manager" May 15 01:10:36.689073 kubelet[2379]: I0515 01:10:36.689065 2379 state_mem.go:36] "Initialized new in-memory state store" May 15 01:10:36.689187 kubelet[2379]: I0515 01:10:36.689180 2379 kubelet.go:400] "Attempting to sync node with API server" May 15 01:10:36.689246 kubelet[2379]: I0515 01:10:36.689229 2379 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 01:10:36.689313 kubelet[2379]: I0515 01:10:36.689305 2379 kubelet.go:312] "Adding apiserver pod source" May 15 01:10:36.689364 kubelet[2379]: I0515 01:10:36.689356 2379 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 01:10:36.691656 kubelet[2379]: I0515 01:10:36.691648 2379 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 01:10:36.691786 kubelet[2379]: I0515 01:10:36.691779 2379 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 01:10:36.692062 kubelet[2379]: I0515 01:10:36.692054 2379 server.go:1264] "Started kubelet" May 15 01:10:36.695000 audit[2379]: AVC avc: denied { mac_admin } for pid=2379 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:36.698376 kubelet[2379]: I0515 01:10:36.698346 2379 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 
01:10:36.699000 kubelet[2379]: I0515 01:10:36.698992 2379 server.go:455] "Adding debug handlers to kubelet server" May 15 01:10:36.695000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:36.701097 kernel: audit: type=1400 audit(1747271436.695:226): avc: denied { mac_admin } for pid=2379 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:36.701150 kernel: audit: type=1401 audit(1747271436.695:226): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:36.701170 kernel: audit: type=1300 audit(1747271436.695:226): arch=c000003e syscall=188 success=no exit=-22 a0=c000c3ac90 a1=c000b79620 a2=c000c3ac60 a3=25 items=0 ppid=1 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:36.695000 audit[2379]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c3ac90 a1=c000b79620 a2=c000c3ac60 a3=25 items=0 ppid=1 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:36.701928 kubelet[2379]: I0515 01:10:36.701900 2379 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 01:10:36.702091 kubelet[2379]: I0515 01:10:36.702084 2379 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 01:10:36.704558 kubelet[2379]: I0515 01:10:36.704450 2379 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 15 01:10:36.704558 kubelet[2379]: I0515 01:10:36.704481 2379 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 15 01:10:36.704558 kubelet[2379]: I0515 01:10:36.704496 2379 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 01:10:36.695000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 01:10:36.706983 kubelet[2379]: I0515 01:10:36.706963 2379 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 01:10:36.708031 kernel: audit: type=1327 audit(1747271436.695:226): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 01:10:36.708785 kubelet[2379]: I0515 01:10:36.708776 2379 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 01:10:36.708943 kubelet[2379]: I0515 01:10:36.708936 2379 reconciler.go:26] "Reconciler: start to sync state" May 15 01:10:36.703000 audit[2379]: AVC avc: denied { mac_admin } for pid=2379 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:36.713259 kernel: audit: type=1400 audit(1747271436.703:227): avc: denied { mac_admin } for pid=2379 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:36.703000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:36.715383 kernel: audit: type=1401 audit(1747271436.703:227): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:36.703000 audit[2379]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c52200 a1=c000b79638 a2=c000c3ad20 a3=25 items=0 ppid=1 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:36.720439 kernel: audit: type=1300 audit(1747271436.703:227): arch=c000003e syscall=188 success=no exit=-22 a0=c000c52200 a1=c000b79638 a2=c000c3ad20 a3=25 items=0 ppid=1 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:36.720652 kubelet[2379]: I0515 01:10:36.720638 2379 factory.go:221] Registration of the systemd container factory successfully May 15 01:10:36.720841 kubelet[2379]: I0515 01:10:36.720829 2379 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 01:10:36.703000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 01:10:36.725242 kernel: audit: type=1327 audit(1747271436.703:227): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 01:10:36.725877 kubelet[2379]: I0515 01:10:36.725869 2379 factory.go:221] Registration of the containerd container factory successfully May 15 01:10:36.733718 kubelet[2379]: I0515 01:10:36.733697 2379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 01:10:36.734694 kubelet[2379]: I0515 01:10:36.734677 2379 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 01:10:36.734734 kubelet[2379]: I0515 01:10:36.734697 2379 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 01:10:36.734734 kubelet[2379]: I0515 01:10:36.734712 2379 kubelet.go:2337] "Starting kubelet main sync loop" May 15 01:10:36.734781 kubelet[2379]: E0515 01:10:36.734735 2379 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 01:10:36.776539 kubelet[2379]: I0515 01:10:36.775579 2379 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 01:10:36.776539 kubelet[2379]: I0515 01:10:36.776255 2379 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 01:10:36.776539 kubelet[2379]: I0515 01:10:36.776270 2379 state_mem.go:36] "Initialized new in-memory state store" May 15 01:10:36.776539 kubelet[2379]: I0515 01:10:36.776408 2379 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 01:10:36.776539 kubelet[2379]: I0515 01:10:36.776416 2379 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 01:10:36.776539 kubelet[2379]: I0515 01:10:36.776442 2379 policy_none.go:49] "None policy: Start" May 15 01:10:36.778181 kubelet[2379]: I0515 01:10:36.778155 2379 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 01:10:36.778221 kubelet[2379]: I0515 01:10:36.778185 2379 state_mem.go:35] "Initializing new in-memory state store" May 15 01:10:36.778323 kubelet[2379]: I0515 01:10:36.778312 2379 state_mem.go:75] "Updated machine memory state" May 15 01:10:36.780358 kubelet[2379]: I0515 01:10:36.780345 2379 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 01:10:36.779000 audit[2379]: AVC avc: denied { mac_admin } for pid=2379 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:10:36.779000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 01:10:36.779000 audit[2379]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d450b0 a1=c001045290 a2=c000d45080 a3=25 items=0 ppid=1 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:36.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 01:10:36.783956 kubelet[2379]: I0515 01:10:36.783918 2379 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 15 01:10:36.784042 kubelet[2379]: I0515 01:10:36.784019 2379 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 01:10:36.787294 kubelet[2379]: I0515 01:10:36.787286 2379 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 01:10:36.808442 kubelet[2379]: I0515 01:10:36.808430 2379 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 01:10:36.818660 kubelet[2379]: I0515 01:10:36.817133 2379 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 01:10:36.818660 kubelet[2379]: I0515 01:10:36.817171 2379 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 01:10:36.835607 kubelet[2379]: I0515 01:10:36.835585 2379 topology_manager.go:215] "Topology Admit Handler" podUID="0ea44c54d4d7506222101f069816bc4b" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 01:10:36.835766 kubelet[2379]: I0515 01:10:36.835757 2379 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 01:10:36.836016 kubelet[2379]: I0515 01:10:36.836006 2379 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 01:10:36.909447 kubelet[2379]: I0515 01:10:36.909428 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:36.909573 kubelet[2379]: I0515 01:10:36.909563 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:36.909632 kubelet[2379]: I0515 01:10:36.909624 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:36.909690 kubelet[2379]: I0515 01:10:36.909682 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 01:10:36.909748 kubelet[2379]: I0515 01:10:36.909734 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ea44c54d4d7506222101f069816bc4b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea44c54d4d7506222101f069816bc4b\") " pod="kube-system/kube-apiserver-localhost" May 15 01:10:36.909802 kubelet[2379]: I0515 
01:10:36.909794 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ea44c54d4d7506222101f069816bc4b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea44c54d4d7506222101f069816bc4b\") " pod="kube-system/kube-apiserver-localhost" May 15 01:10:36.909857 kubelet[2379]: I0515 01:10:36.909844 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:36.909911 kubelet[2379]: I0515 01:10:36.909903 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ea44c54d4d7506222101f069816bc4b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ea44c54d4d7506222101f069816bc4b\") " pod="kube-system/kube-apiserver-localhost" May 15 01:10:36.911347 kubelet[2379]: I0515 01:10:36.911329 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 01:10:37.689983 kubelet[2379]: I0515 01:10:37.689959 2379 apiserver.go:52] "Watching apiserver" May 15 01:10:37.710248 kubelet[2379]: I0515 01:10:37.709293 2379 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 01:10:37.764295 kubelet[2379]: E0515 01:10:37.763220 2379 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 01:10:37.772571 kubelet[2379]: I0515 01:10:37.772543 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7725333669999999 podStartE2EDuration="1.772533367s" podCreationTimestamp="2025-05-15 01:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:10:37.77011031 +0000 UTC m=+1.241993401" watchObservedRunningTime="2025-05-15 01:10:37.772533367 +0000 UTC m=+1.244416453" May 15 01:10:37.774658 kubelet[2379]: I0515 01:10:37.774630 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.774622116 podStartE2EDuration="1.774622116s" podCreationTimestamp="2025-05-15 01:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:10:37.774544333 +0000 UTC m=+1.246450009" watchObservedRunningTime="2025-05-15 01:10:37.774622116 +0000 UTC m=+1.246505202" May 15 01:10:37.781921 kubelet[2379]: I0515 01:10:37.781894 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.78187217 podStartE2EDuration="1.78187217s" podCreationTimestamp="2025-05-15 01:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:10:37.777889215 +0000 UTC m=+1.249772309" watchObservedRunningTime="2025-05-15 01:10:37.78187217 +0000 UTC m=+1.253755264" May 15 01:10:40.643290 sudo[1644]: pam_unix(sudo:session): session closed for user root May 15 01:10:40.644630 kernel: kauditd_printk_skb: 4 callbacks suppressed May 15 01:10:40.644684 kernel: audit: type=1106 audit(1747271440.642:229): pid=1644 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:40.642000 audit[1644]: USER_END pid=1644 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:40.648418 kernel: audit: type=1104 audit(1747271440.643:230): pid=1644 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:40.643000 audit[1644]: CRED_DISP pid=1644 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 01:10:40.653889 sshd[1636]: pam_unix(sshd:session): session closed for user core May 15 01:10:40.653000 audit[1636]: USER_END pid=1636 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:40.658931 systemd[1]: sshd@7-139.178.70.104:22-147.75.109.163:33346.service: Deactivated successfully. May 15 01:10:40.659249 kernel: audit: type=1106 audit(1747271440.653:231): pid=1636 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:40.659481 systemd[1]: session-9.scope: Deactivated successfully. May 15 01:10:40.660323 systemd-logind[1361]: Session 9 logged out. Waiting for processes to exit. May 15 01:10:40.660874 systemd-logind[1361]: Removed session 9. May 15 01:10:40.653000 audit[1636]: CRED_DISP pid=1636 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:40.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.104:22-147.75.109.163:33346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:10:40.666775 kernel: audit: type=1104 audit(1747271440.653:232): pid=1636 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:10:40.666808 kernel: audit: type=1131 audit(1747271440.657:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.104:22-147.75.109.163:33346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:10:49.440088 kubelet[2379]: I0515 01:10:49.440056 2379 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 01:10:49.440528 env[1378]: time="2025-05-15T01:10:49.440509198Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 01:10:49.440802 kubelet[2379]: I0515 01:10:49.440791 2379 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 01:10:50.173803 kubelet[2379]: I0515 01:10:50.173775 2379 topology_manager.go:215] "Topology Admit Handler" podUID="00304ac6-873e-4e65-9f13-bf470a4824bd" podNamespace="kube-system" podName="kube-proxy-924fr" May 15 01:10:50.202848 kubelet[2379]: I0515 01:10:50.202816 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00304ac6-873e-4e65-9f13-bf470a4824bd-kube-proxy\") pod \"kube-proxy-924fr\" (UID: \"00304ac6-873e-4e65-9f13-bf470a4824bd\") " pod="kube-system/kube-proxy-924fr" May 15 01:10:50.202848 kubelet[2379]: I0515 01:10:50.202845 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00304ac6-873e-4e65-9f13-bf470a4824bd-lib-modules\") pod \"kube-proxy-924fr\" (UID: \"00304ac6-873e-4e65-9f13-bf470a4824bd\") " pod="kube-system/kube-proxy-924fr" May 15 01:10:50.203016 kubelet[2379]: I0515 01:10:50.202875 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00304ac6-873e-4e65-9f13-bf470a4824bd-xtables-lock\") pod \"kube-proxy-924fr\" (UID: \"00304ac6-873e-4e65-9f13-bf470a4824bd\") " pod="kube-system/kube-proxy-924fr" May 15 01:10:50.203016 kubelet[2379]: I0515 01:10:50.202892 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng5bs\" (UniqueName: \"kubernetes.io/projected/00304ac6-873e-4e65-9f13-bf470a4824bd-kube-api-access-ng5bs\") pod \"kube-proxy-924fr\" (UID: \"00304ac6-873e-4e65-9f13-bf470a4824bd\") " pod="kube-system/kube-proxy-924fr" May 15 01:10:50.347688 kubelet[2379]: I0515 01:10:50.347662 2379 topology_manager.go:215] "Topology Admit Handler" podUID="88325b1e-a367-4b96-b5a3-564411d41087" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-spmgn" May 15 01:10:50.404329 kubelet[2379]: I0515 01:10:50.404295 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgplq\" (UniqueName: \"kubernetes.io/projected/88325b1e-a367-4b96-b5a3-564411d41087-kube-api-access-cgplq\") pod \"tigera-operator-797db67f8-spmgn\" (UID: \"88325b1e-a367-4b96-b5a3-564411d41087\") " pod="tigera-operator/tigera-operator-797db67f8-spmgn" May 15 01:10:50.404329 
kubelet[2379]: I0515 01:10:50.404329 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/88325b1e-a367-4b96-b5a3-564411d41087-var-lib-calico\") pod \"tigera-operator-797db67f8-spmgn\" (UID: \"88325b1e-a367-4b96-b5a3-564411d41087\") " pod="tigera-operator/tigera-operator-797db67f8-spmgn" May 15 01:10:50.477984 env[1378]: time="2025-05-15T01:10:50.477683842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-924fr,Uid:00304ac6-873e-4e65-9f13-bf470a4824bd,Namespace:kube-system,Attempt:0,}" May 15 01:10:50.490188 env[1378]: time="2025-05-15T01:10:50.490134736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:10:50.490311 env[1378]: time="2025-05-15T01:10:50.490295698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:10:50.490388 env[1378]: time="2025-05-15T01:10:50.490374430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:10:50.490775 env[1378]: time="2025-05-15T01:10:50.490739719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76a9281cb2804f31359fd527edaf027f070a70589f7a041fb5b12a7d0a1e864b pid=2464 runtime=io.containerd.runc.v2 May 15 01:10:50.527391 env[1378]: time="2025-05-15T01:10:50.527368617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-924fr,Uid:00304ac6-873e-4e65-9f13-bf470a4824bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"76a9281cb2804f31359fd527edaf027f070a70589f7a041fb5b12a7d0a1e864b\"" May 15 01:10:50.529883 env[1378]: time="2025-05-15T01:10:50.529866842Z" level=info msg="CreateContainer within sandbox \"76a9281cb2804f31359fd527edaf027f070a70589f7a041fb5b12a7d0a1e864b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 01:10:50.629476 env[1378]: time="2025-05-15T01:10:50.629440959Z" level=info msg="CreateContainer within sandbox \"76a9281cb2804f31359fd527edaf027f070a70589f7a041fb5b12a7d0a1e864b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ba1d040d33853f80bcdb4baff33754f50c0ddfc6a6099f7a67ce8c217abaff0\"" May 15 01:10:50.630783 env[1378]: time="2025-05-15T01:10:50.630763379Z" level=info msg="StartContainer for \"2ba1d040d33853f80bcdb4baff33754f50c0ddfc6a6099f7a67ce8c217abaff0\"" May 15 01:10:50.656508 env[1378]: time="2025-05-15T01:10:50.656473251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-spmgn,Uid:88325b1e-a367-4b96-b5a3-564411d41087,Namespace:tigera-operator,Attempt:0,}" May 15 01:10:50.685858 env[1378]: time="2025-05-15T01:10:50.685826730Z" level=info msg="StartContainer for \"2ba1d040d33853f80bcdb4baff33754f50c0ddfc6a6099f7a67ce8c217abaff0\" returns successfully" May 15 01:10:50.697221 env[1378]: time="2025-05-15T01:10:50.697164023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:10:50.697221 env[1378]: time="2025-05-15T01:10:50.697217818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:10:50.697383 env[1378]: time="2025-05-15T01:10:50.697247929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:10:50.697383 env[1378]: time="2025-05-15T01:10:50.697347607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/412eb0304973281b1fc36ba8099b6ee23c6cbf73e652b5dcad8ff95109ac0aa8 pid=2536 runtime=io.containerd.runc.v2 May 15 01:10:50.745450 env[1378]: time="2025-05-15T01:10:50.744734621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-spmgn,Uid:88325b1e-a367-4b96-b5a3-564411d41087,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"412eb0304973281b1fc36ba8099b6ee23c6cbf73e652b5dcad8ff95109ac0aa8\"" May 15 01:10:50.748903 env[1378]: time="2025-05-15T01:10:50.748877819Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 01:10:50.797751 kubelet[2379]: I0515 01:10:50.795924 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-924fr" podStartSLOduration=0.795913016 podStartE2EDuration="795.913016ms" podCreationTimestamp="2025-05-15 01:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:10:50.795608136 +0000 UTC m=+14.267491229" watchObservedRunningTime="2025-05-15 01:10:50.795913016 +0000 UTC m=+14.267796103" May 15 01:10:51.898000 audit[2599]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:51.898000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca6281340 a2=0 a3=7ffca628132c items=0 ppid=2516 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.907053 kernel: audit: type=1325 audit(1747271451.898:234): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:51.907151 kernel: audit: type=1300 audit(1747271451.898:234): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca6281340 a2=0 a3=7ffca628132c items=0 ppid=2516 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.902000 audit[2600]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:51.911247 kernel: audit: type=1325 audit(1747271451.902:235): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:51.902000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcebc55e60 a2=0 a3=7ffcebc55e4c items=0 ppid=2516 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.917937 kernel: audit: type=1300 audit(1747271451.902:235): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcebc55e60 a2=0 a3=7ffcebc55e4c items=0 
ppid=2516 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.917994 kernel: audit: type=1327 audit(1747271451.902:235): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 01:10:51.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 01:10:51.919881 kernel: audit: type=1325 audit(1747271451.902:236): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:51.902000 audit[2602]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:51.902000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5f670100 a2=0 a3=7ffc5f6700ec items=0 ppid=2516 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.923750 kernel: audit: type=1300 audit(1747271451.902:236): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5f670100 a2=0 a3=7ffc5f6700ec items=0 ppid=2516 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.923786 kernel: audit: type=1327 audit(1747271451.902:236): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 01:10:51.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 01:10:51.902000 audit[2603]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:51.927407 kernel: audit: type=1325 audit(1747271451.902:237): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:51.927439 kernel: audit: type=1300 audit(1747271451.902:237): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa674a730 a2=0 a3=7fffa674a71c items=0 ppid=2516 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.902000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa674a730 a2=0 a3=7fffa674a71c items=0 ppid=2516 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 15 01:10:51.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 01:10:51.908000 audit[2604]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2604 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:51.908000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfd844420 a2=0 a3=7ffdfd84440c items=0 ppid=2516 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.908000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 01:10:51.909000 audit[2605]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:51.909000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed665dcf0 a2=0 a3=7ffed665dcdc items=0 ppid=2516 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:51.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 15 01:10:52.001000 audit[2606]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.001000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd42c70250 a2=0 a3=7ffd42c7023c items=0 ppid=2516 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 15 01:10:52.144000 audit[2608]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.144000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc96f3fa80 a2=0 a3=7ffc96f3fa6c items=0 ppid=2516 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 15 01:10:52.187000 audit[2611]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.187000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcf7367440 a2=0 a3=7ffcf736742c items=0 ppid=2516 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.187000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 15 01:10:52.188000 audit[2612]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.188000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbe573480 a2=0 a3=7fffbe57346c items=0 ppid=2516 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 15 01:10:52.191000 audit[2614]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2614 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.191000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf6f97c50 a2=0 a3=7ffcf6f97c3c items=0 ppid=2516 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 15 01:10:52.192000 audit[2615]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2615 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.192000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc680fa1b0 a2=0 a3=7ffc680fa19c items=0 ppid=2516 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 15 01:10:52.194000 audit[2617]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2617 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.194000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1ab88680 a2=0 a3=7ffd1ab8866c items=0 ppid=2516 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 15 01:10:52.197000 audit[2620]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2620 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.197000 audit[2620]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc4c80fc80 a2=0 a3=7ffc4c80fc6c items=0 
ppid=2516 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 15 01:10:52.198000 audit[2621]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2621 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.198000 audit[2621]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd46dd2f0 a2=0 a3=7fffd46dd2dc items=0 ppid=2516 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 15 01:10:52.199000 audit[2623]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.199000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1363d3b0 a2=0 a3=7fff1363d39c items=0 ppid=2516 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 15 01:10:52.200000 audit[2624]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2624 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.200000 audit[2624]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe7e3ad70 a2=0 a3=7fffe7e3ad5c items=0 ppid=2516 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 15 01:10:52.202000 audit[2626]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.202000 audit[2626]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd5d275160 a2=0 a3=7ffd5d27514c items=0 ppid=2516 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 01:10:52.204000 audit[2629]: NETFILTER_CFG table=filter:56 
family=2 entries=1 op=nft_register_rule pid=2629 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.204000 audit[2629]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4c809700 a2=0 a3=7fff4c8096ec items=0 ppid=2516 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 01:10:52.206000 audit[2632]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2632 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.206000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc44f27810 a2=0 a3=7ffc44f277fc items=0 ppid=2516 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 15 01:10:52.207000 audit[2633]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.207000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf04e70f0 a2=0 a3=7ffcf04e70dc items=0 ppid=2516 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 15 01:10:52.215000 audit[2635]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.215000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff6d32e460 a2=0 a3=7fff6d32e44c items=0 ppid=2516 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 01:10:52.217000 audit[2638]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.217000 audit[2638]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe067439e0 a2=0 a3=7ffe067439cc items=0 ppid=2516 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 
01:10:52.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 01:10:52.217000 audit[2639]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2639 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.217000 audit[2639]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbaa22d80 a2=0 a3=7ffcbaa22d6c items=0 ppid=2516 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 15 01:10:52.219000 audit[2641]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2641 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 01:10:52.219000 audit[2641]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd20557ef0 a2=0 a3=7ffd20557edc items=0 ppid=2516 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 15 01:10:52.362000 audit[2647]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:52.362000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcbbf293c0 a2=0 a3=7ffcbbf293ac items=0 ppid=2516 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:52.372000 audit[2647]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:52.372000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcbbf293c0 a2=0 a3=7ffcbbf293ac items=0 ppid=2516 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:52.374000 audit[2653]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.374000 audit[2653]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc02cc7f50 a2=0 a3=7ffc02cc7f3c items=0 ppid=2516 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.374000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 15 01:10:52.376000 audit[2655]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2655 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.376000 audit[2655]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd57e180f0 a2=0 a3=7ffd57e180dc items=0 ppid=2516 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 15 01:10:52.379000 audit[2658]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.379000 audit[2658]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc9bbd97b0 a2=0 a3=7ffc9bbd979c items=0 ppid=2516 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.379000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 15 01:10:52.380000 audit[2659]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2659 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.380000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9acf0b50 a2=0 a3=7fff9acf0b3c items=0 ppid=2516 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 15 01:10:52.381000 audit[2661]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.381000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec20d0350 a2=0 a3=7ffec20d033c items=0 ppid=2516 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 15 01:10:52.382000 audit[2662]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2662 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" May 15 01:10:52.382000 audit[2662]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd725ec060 a2=0 a3=7ffd725ec04c items=0 ppid=2516 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 15 01:10:52.384000 audit[2664]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2664 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.384000 audit[2664]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe1e59de70 a2=0 a3=7ffe1e59de5c items=0 ppid=2516 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 15 01:10:52.387000 audit[2667]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2667 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.387000 audit[2667]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff66586180 a2=0 a3=7fff6658616c items=0 ppid=2516 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 15 01:10:52.387000 audit[2668]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2668 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.387000 audit[2668]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4aa48ec0 a2=0 a3=7ffc4aa48eac items=0 ppid=2516 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 15 01:10:52.389000 audit[2670]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.389000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef6e1e850 a2=0 a3=7ffef6e1e83c items=0 ppid=2516 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.389000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 15 01:10:52.390000 audit[2671]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.390000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffbe299670 a2=0 a3=7fffbe29965c items=0 ppid=2516 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.390000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 15 01:10:52.391000 audit[2673]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2673 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.391000 audit[2673]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0e1919b0 a2=0 a3=7ffd0e19199c items=0 ppid=2516 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 01:10:52.394000 audit[2676]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2676 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.394000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0ecdc110 a2=0 a3=7ffc0ecdc0fc items=0 ppid=2516 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 15 01:10:52.396000 audit[2679]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2679 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.396000 audit[2679]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdae8c5ce0 a2=0 a3=7ffdae8c5ccc items=0 ppid=2516 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 15 01:10:52.397000 audit[2680]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2680 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" May 15 01:10:52.397000 audit[2680]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2a97eeb0 a2=0 a3=7ffe2a97ee9c items=0 ppid=2516 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.397000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 15 01:10:52.398000 audit[2682]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.398000 audit[2682]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff6e99fea0 a2=0 a3=7fff6e99fe8c items=0 ppid=2516 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.398000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 01:10:52.401000 audit[2685]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2685 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.401000 audit[2685]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffffad90980 a2=0 a3=7ffffad9096c items=0 ppid=2516 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 01:10:52.402000 audit[2686]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.402000 audit[2686]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed577d1b0 a2=0 a3=7ffed577d19c items=0 ppid=2516 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 15 01:10:52.403000 audit[2688]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2688 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.403000 audit[2688]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe5e5c60c0 a2=0 a3=7ffe5e5c60ac items=0 ppid=2516 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.403000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 15 01:10:52.404000 audit[2689]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.404000 audit[2689]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee7288020 a2=0 a3=7ffee728800c items=0 ppid=2516 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 15 01:10:52.406000 audit[2691]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2691 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.406000 audit[2691]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe58dc4ad0 a2=0 a3=7ffe58dc4abc items=0 ppid=2516 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 01:10:52.408000 audit[2694]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 01:10:52.408000 audit[2694]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8e9f17c0 a2=0 a3=7ffc8e9f17ac items=0 ppid=2516 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 01:10:52.410000 audit[2696]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 15 01:10:52.410000 audit[2696]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffc75d76c00 a2=0 a3=7ffc75d76bec items=0 ppid=2516 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.410000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:52.410000 audit[2696]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 15 01:10:52.410000 audit[2696]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc75d76c00 a2=0 a3=7ffc75d76bec items=0 ppid=2516 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:52.410000 
audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:52.759547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081749061.mount: Deactivated successfully. May 15 01:10:53.531094 env[1378]: time="2025-05-15T01:10:53.531057264Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:53.545935 env[1378]: time="2025-05-15T01:10:53.545909998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:53.551849 env[1378]: time="2025-05-15T01:10:53.551822950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:53.557633 env[1378]: time="2025-05-15T01:10:53.557616991Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:10:53.558025 env[1378]: time="2025-05-15T01:10:53.558007035Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 01:10:53.569553 env[1378]: time="2025-05-15T01:10:53.569536550Z" level=info msg="CreateContainer within sandbox \"412eb0304973281b1fc36ba8099b6ee23c6cbf73e652b5dcad8ff95109ac0aa8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 01:10:53.583493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612759244.mount: Deactivated successfully. May 15 01:10:53.588487 env[1378]: time="2025-05-15T01:10:53.588459864Z" level=info msg="CreateContainer within sandbox \"412eb0304973281b1fc36ba8099b6ee23c6cbf73e652b5dcad8ff95109ac0aa8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"07de85a76713ccfb44e5e1ab5c01a3ce93b74c047a934685131382fbc8a1e21c\"" May 15 01:10:53.588926 env[1378]: time="2025-05-15T01:10:53.588910178Z" level=info msg="StartContainer for \"07de85a76713ccfb44e5e1ab5c01a3ce93b74c047a934685131382fbc8a1e21c\"" May 15 01:10:53.628812 env[1378]: time="2025-05-15T01:10:53.628772684Z" level=info msg="StartContainer for \"07de85a76713ccfb44e5e1ab5c01a3ce93b74c047a934685131382fbc8a1e21c\" returns successfully" May 15 01:10:53.726550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975674265.mount: Deactivated successfully. 
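The PROCTITLE values in the audit records above are the hex-encoded, NUL-separated argv of the audited process. A minimal decoding sketch (Python, not part of the log; the hex value is copied from the pid 2599 PROCTITLE record above) recovers the underlying command line, which here corresponds to kube-proxy creating its KUBE-PROXY-CANARY chain in the mangle table:

    # Decode an audit PROCTITLE hex blob back into a readable command line.
    import binascii

    def decode_proctitle(hex_string: str) -> str:
        raw = binascii.unhexlify(hex_string)   # hex -> raw bytes
        # argv elements are separated by NUL bytes
        return " ".join(arg.decode() for arg in raw.split(b"\x00"))

    # Value taken from the PROCTITLE record for pid 2599 above.
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle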
May 15 01:10:56.451000 audit[2735]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:56.451000 audit[2735]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff27542940 a2=0 a3=7fff2754292c items=0 ppid=2516 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:56.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:56.455000 audit[2735]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:56.455000 audit[2735]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff27542940 a2=0 a3=0 items=0 ppid=2516 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:56.455000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:56.463000 audit[2737]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:56.463000 audit[2737]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd99f2a6c0 a2=0 a3=7ffd99f2a6ac items=0 ppid=2516 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:56.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:56.468000 audit[2737]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:56.468000 audit[2737]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd99f2a6c0 a2=0 a3=0 items=0 ppid=2516 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:56.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:56.697786 kubelet[2379]: I0515 01:10:56.697753 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-spmgn" podStartSLOduration=3.885330873 podStartE2EDuration="6.697740343s" podCreationTimestamp="2025-05-15 01:10:50 +0000 UTC" firstStartedPulling="2025-05-15 01:10:50.7463727 +0000 UTC m=+14.218255784" lastFinishedPulling="2025-05-15 01:10:53.558782166 +0000 UTC m=+17.030665254" observedRunningTime="2025-05-15 01:10:53.872055229 +0000 UTC m=+17.343938323" watchObservedRunningTime="2025-05-15 01:10:56.697740343 +0000 UTC m=+20.169623433" May 15 01:10:56.698315 kubelet[2379]: I0515 01:10:56.698301 2379 topology_manager.go:215] "Topology Admit Handler" podUID="17854bce-67b1-4319-98ae-62f549924452" podNamespace="calico-system" 
podName="calico-typha-f4f67cc77-5nb7d" May 15 01:10:56.751550 kubelet[2379]: I0515 01:10:56.751473 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/17854bce-67b1-4319-98ae-62f549924452-typha-certs\") pod \"calico-typha-f4f67cc77-5nb7d\" (UID: \"17854bce-67b1-4319-98ae-62f549924452\") " pod="calico-system/calico-typha-f4f67cc77-5nb7d" May 15 01:10:56.751678 kubelet[2379]: I0515 01:10:56.751663 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69n4b\" (UniqueName: \"kubernetes.io/projected/17854bce-67b1-4319-98ae-62f549924452-kube-api-access-69n4b\") pod \"calico-typha-f4f67cc77-5nb7d\" (UID: \"17854bce-67b1-4319-98ae-62f549924452\") " pod="calico-system/calico-typha-f4f67cc77-5nb7d" May 15 01:10:56.751742 kubelet[2379]: I0515 01:10:56.751732 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17854bce-67b1-4319-98ae-62f549924452-tigera-ca-bundle\") pod \"calico-typha-f4f67cc77-5nb7d\" (UID: \"17854bce-67b1-4319-98ae-62f549924452\") " pod="calico-system/calico-typha-f4f67cc77-5nb7d" May 15 01:10:56.859273 kubelet[2379]: I0515 01:10:56.859220 2379 topology_manager.go:215] "Topology Admit Handler" podUID="189c4de8-e8f1-4114-a090-fa3fd31b5ca8" podNamespace="calico-system" podName="calico-node-l85sc" May 15 01:10:56.953209 kubelet[2379]: I0515 01:10:56.953173 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-xtables-lock\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953346 kubelet[2379]: I0515 01:10:56.953219 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-net-dir\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953346 kubelet[2379]: I0515 01:10:56.953245 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-tigera-ca-bundle\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953346 kubelet[2379]: I0515 01:10:56.953259 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-node-certs\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953346 kubelet[2379]: I0515 01:10:56.953274 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-var-run-calico\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953346 kubelet[2379]: I0515 01:10:56.953288 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-var-lib-calico\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953754 kubelet[2379]: I0515 01:10:56.953302 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-policysync\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953754 kubelet[2379]: I0515 01:10:56.953326 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-bin-dir\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953754 kubelet[2379]: I0515 01:10:56.953344 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-lib-modules\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953754 kubelet[2379]: I0515 01:10:56.953359 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-flexvol-driver-host\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953754 kubelet[2379]: I0515 01:10:56.953374 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-log-dir\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.953849 kubelet[2379]: I0515 01:10:56.953387 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccsw\" (UniqueName: \"kubernetes.io/projected/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-kube-api-access-bccsw\") pod \"calico-node-l85sc\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " pod="calico-system/calico-node-l85sc" May 15 01:10:56.963507 kubelet[2379]: I0515 01:10:56.963467 2379 topology_manager.go:215] "Topology Admit Handler" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" podNamespace="calico-system" podName="csi-node-driver-lxhjw" May 15 01:10:56.963755 kubelet[2379]: E0515 01:10:56.963735 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:10:57.007301 env[1378]: time="2025-05-15T01:10:57.007223544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f4f67cc77-5nb7d,Uid:17854bce-67b1-4319-98ae-62f549924452,Namespace:calico-system,Attempt:0,}" May 15 01:10:57.053997 kubelet[2379]: I0515 01:10:57.053964 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5-varrun\") pod \"csi-node-driver-lxhjw\" (UID: \"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5\") " pod="calico-system/csi-node-driver-lxhjw" May 15 01:10:57.054090 kubelet[2379]: I0515 01:10:57.054007 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5-registration-dir\") pod \"csi-node-driver-lxhjw\" (UID: \"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5\") " pod="calico-system/csi-node-driver-lxhjw" May 15 01:10:57.054090 kubelet[2379]: I0515 01:10:57.054034 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5-kubelet-dir\") pod \"csi-node-driver-lxhjw\" (UID: \"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5\") " pod="calico-system/csi-node-driver-lxhjw" May 15 01:10:57.054090 kubelet[2379]: I0515 01:10:57.054087 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5-socket-dir\") pod \"csi-node-driver-lxhjw\" (UID: \"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5\") " pod="calico-system/csi-node-driver-lxhjw" May 15 01:10:57.054160 kubelet[2379]: I0515 01:10:57.054108 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqxsd\" (UniqueName: \"kubernetes.io/projected/9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5-kube-api-access-bqxsd\") pod \"csi-node-driver-lxhjw\" (UID: \"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5\") " pod="calico-system/csi-node-driver-lxhjw" May 15 01:10:57.060761 kubelet[2379]: E0515 01:10:57.060735 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.060761 kubelet[2379]: W0515 01:10:57.060749 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.060761 kubelet[2379]: E0515 01:10:57.060765 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.071623 kubelet[2379]: E0515 01:10:57.071611 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.071706 kubelet[2379]: W0515 01:10:57.071696 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.071765 kubelet[2379]: E0515 01:10:57.071755 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.090803 env[1378]: time="2025-05-15T01:10:57.090754270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:10:57.090803 env[1378]: time="2025-05-15T01:10:57.090782260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:10:57.090928 env[1378]: time="2025-05-15T01:10:57.090905914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:10:57.091125 env[1378]: time="2025-05-15T01:10:57.091099101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f pid=2751 runtime=io.containerd.runc.v2 May 15 01:10:57.138288 env[1378]: time="2025-05-15T01:10:57.138256126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f4f67cc77-5nb7d,Uid:17854bce-67b1-4319-98ae-62f549924452,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\"" May 15 01:10:57.139855 env[1378]: time="2025-05-15T01:10:57.139822479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 01:10:57.155358 kubelet[2379]: E0515 01:10:57.155271 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.155358 kubelet[2379]: W0515 01:10:57.155284 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.155358 kubelet[2379]: E0515 01:10:57.155297 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.155590 kubelet[2379]: E0515 01:10:57.155521 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.155590 kubelet[2379]: W0515 01:10:57.155528 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.155590 kubelet[2379]: E0515 01:10:57.155538 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.155734 kubelet[2379]: E0515 01:10:57.155696 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.155734 kubelet[2379]: W0515 01:10:57.155702 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.155734 kubelet[2379]: E0515 01:10:57.155711 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:10:57.155818 kubelet[2379]: E0515 01:10:57.155808 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.155853 kubelet[2379]: W0515 01:10:57.155818 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.155853 kubelet[2379]: E0515 01:10:57.155834 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.155931 kubelet[2379]: E0515 01:10:57.155921 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.155931 kubelet[2379]: W0515 01:10:57.155929 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.155988 kubelet[2379]: E0515 01:10:57.155936 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.156032 kubelet[2379]: E0515 01:10:57.156021 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.156063 kubelet[2379]: W0515 01:10:57.156029 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.156063 kubelet[2379]: E0515 01:10:57.156039 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.156172 kubelet[2379]: E0515 01:10:57.156161 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.156172 kubelet[2379]: W0515 01:10:57.156169 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.156229 kubelet[2379]: E0515 01:10:57.156176 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.157126 kubelet[2379]: E0515 01:10:57.157064 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.157126 kubelet[2379]: W0515 01:10:57.157071 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.157126 kubelet[2379]: E0515 01:10:57.157081 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:10:57.157294 kubelet[2379]: E0515 01:10:57.157230 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.157294 kubelet[2379]: W0515 01:10:57.157249 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.157294 kubelet[2379]: E0515 01:10:57.157268 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.157439 kubelet[2379]: E0515 01:10:57.157396 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.157439 kubelet[2379]: W0515 01:10:57.157402 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.157439 kubelet[2379]: E0515 01:10:57.157415 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.157582 kubelet[2379]: E0515 01:10:57.157540 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.157582 kubelet[2379]: W0515 01:10:57.157545 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.157582 kubelet[2379]: E0515 01:10:57.157563 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.157758 kubelet[2379]: E0515 01:10:57.157684 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.157758 kubelet[2379]: W0515 01:10:57.157690 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.157758 kubelet[2379]: E0515 01:10:57.157698 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.157918 kubelet[2379]: E0515 01:10:57.157856 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.157918 kubelet[2379]: W0515 01:10:57.157862 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.157918 kubelet[2379]: E0515 01:10:57.157870 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:10:57.158075 kubelet[2379]: E0515 01:10:57.158037 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.158075 kubelet[2379]: W0515 01:10:57.158043 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.158075 kubelet[2379]: E0515 01:10:57.158052 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.167358 kubelet[2379]: E0515 01:10:57.158143 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.167358 kubelet[2379]: W0515 01:10:57.158151 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.167358 kubelet[2379]: E0515 01:10:57.158161 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.167358 kubelet[2379]: E0515 01:10:57.158271 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.167358 kubelet[2379]: W0515 01:10:57.158275 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.167358 kubelet[2379]: E0515 01:10:57.158283 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.167358 kubelet[2379]: E0515 01:10:57.158397 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.167358 kubelet[2379]: W0515 01:10:57.158402 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.167358 kubelet[2379]: E0515 01:10:57.158408 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:10:57.167358 kubelet[2379]: E0515 01:10:57.158508 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.167532 env[1378]: time="2025-05-15T01:10:57.164641694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l85sc,Uid:189c4de8-e8f1-4114-a090-fa3fd31b5ca8,Namespace:calico-system,Attempt:0,}" May 15 01:10:57.167559 kubelet[2379]: W0515 01:10:57.158513 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.167559 kubelet[2379]: E0515 01:10:57.158518 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.167559 kubelet[2379]: E0515 01:10:57.158604 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.167559 kubelet[2379]: W0515 01:10:57.158608 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.167559 kubelet[2379]: E0515 01:10:57.158617 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.167559 kubelet[2379]: E0515 01:10:57.158710 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.167559 kubelet[2379]: W0515 01:10:57.158716 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.167559 kubelet[2379]: E0515 01:10:57.158722 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.167559 kubelet[2379]: E0515 01:10:57.158818 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.167559 kubelet[2379]: W0515 01:10:57.158822 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.181801 kubelet[2379]: E0515 01:10:57.158826 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:10:57.181801 kubelet[2379]: E0515 01:10:57.158913 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.181801 kubelet[2379]: W0515 01:10:57.158918 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.181801 kubelet[2379]: E0515 01:10:57.158922 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.181801 kubelet[2379]: E0515 01:10:57.158996 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.181801 kubelet[2379]: W0515 01:10:57.159000 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.181801 kubelet[2379]: E0515 01:10:57.159004 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.181801 kubelet[2379]: E0515 01:10:57.159076 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.181801 kubelet[2379]: W0515 01:10:57.159080 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.181801 kubelet[2379]: E0515 01:10:57.159085 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.181997 kubelet[2379]: E0515 01:10:57.166000 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.181997 kubelet[2379]: W0515 01:10:57.166007 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.181997 kubelet[2379]: E0515 01:10:57.166018 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:10:57.187136 kubelet[2379]: E0515 01:10:57.187088 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:10:57.187136 kubelet[2379]: W0515 01:10:57.187099 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:10:57.187136 kubelet[2379]: E0515 01:10:57.187111 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:10:57.225480 env[1378]: time="2025-05-15T01:10:57.225421391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:10:57.225612 env[1378]: time="2025-05-15T01:10:57.225493220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:10:57.225612 env[1378]: time="2025-05-15T01:10:57.225514528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:10:57.225775 env[1378]: time="2025-05-15T01:10:57.225739397Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7 pid=2818 runtime=io.containerd.runc.v2 May 15 01:10:57.263716 env[1378]: time="2025-05-15T01:10:57.261608237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l85sc,Uid:189c4de8-e8f1-4114-a090-fa3fd31b5ca8,Namespace:calico-system,Attempt:0,} returns sandbox id \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\"" May 15 01:10:57.477000 audit[2852]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:57.480125 kernel: kauditd_printk_skb: 155 callbacks suppressed May 15 01:10:57.480176 kernel: audit: type=1325 audit(1747271457.477:289): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:57.477000 audit[2852]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fffaef28aa0 a2=0 a3=7fffaef28a8c items=0 ppid=2516 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:57.487747 kernel: audit: type=1300 audit(1747271457.477:289): arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fffaef28aa0 a2=0 a3=7fffaef28a8c items=0 ppid=2516 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:57.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:57.490758 kernel: audit: type=1327 audit(1747271457.477:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:57.490810 kernel: audit: type=1325 audit(1747271457.487:290): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:57.487000 audit[2852]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:10:57.487000 audit[2852]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffaef28aa0 a2=0 a3=0 items=0 ppid=2516 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:57.487000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:57.500169 kernel: audit: type=1300 audit(1747271457.487:290): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffaef28aa0 a2=0 a3=0 items=0 ppid=2516 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:10:57.500213 kernel: audit: type=1327 audit(1747271457.487:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:10:58.735215 kubelet[2379]: E0515 01:10:58.735177 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:11:00.149901 env[1378]: time="2025-05-15T01:11:00.149861835Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:00.151002 env[1378]: time="2025-05-15T01:11:00.150978046Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:00.151824 env[1378]: time="2025-05-15T01:11:00.151805693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:00.152856 env[1378]: time="2025-05-15T01:11:00.152838929Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:00.153396 env[1378]: time="2025-05-15T01:11:00.153373717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 01:11:00.154439 env[1378]: time="2025-05-15T01:11:00.154421748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 01:11:00.166745 env[1378]: time="2025-05-15T01:11:00.166724068Z" level=info msg="CreateContainer within sandbox \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 01:11:00.202411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1647734454.mount: Deactivated successfully. 
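[annotation] The PROCTITLE values in the two audit records above are the hex-encoded argv of the audited process, with NUL bytes separating the arguments. A small decoding sketch in Python, with the hex copied from the record above:

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    proctitle = (
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    args = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in args))
    # iptables-restore -w 5 -W 100000 --noflush --counters

Decoded, pid 2852 (comm "iptables-restor", exe /usr/sbin/xtables-nft-multi) was running iptables-restore -w 5 -W 100000 --noflush --counters, the restore call behind the nft_register_rule entries logged above for the filter and nat tables.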
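[annotation] The recurring driver-call.go / plugins.go entries above and below all come from the kubelet's FlexVolume prober: it executes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and unmarshals its stdout as JSON, so a missing executable produces empty output and the "unexpected end of JSON input" error. A minimal sketch of the reply shape the probe expects, assuming a stand-in Python stub rather than the real nodeagent~uds driver:

    #!/usr/bin/env python3
    # Illustrative stand-in (not the real nodeagent~uds driver): shows the JSON
    # reply shape the kubelet FlexVolume probe expects from "<driver> init".
    # The "unexpected end of JSON input" entries in this log come from an empty
    # stdout, because no executable exists at the probed path.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # A successful init reply; "attach": false means no attach/detach calls.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Anything this stub does not implement is reported as not supported.
        print(json.dumps({"status": "Not supported",
                          "message": "operation %r not implemented" % op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

If an executable emitting JSON like this sat at the probed path, the prober would parse the init reply instead of logging the unmarshal error; whether installing such a driver or removing the stale nodeagent~uds plugin directory is the intended remedy is not something this log states.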
May 15 01:11:00.244815 env[1378]: time="2025-05-15T01:11:00.244789214Z" level=info msg="CreateContainer within sandbox \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\"" May 15 01:11:00.245581 env[1378]: time="2025-05-15T01:11:00.245372516Z" level=info msg="StartContainer for \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\"" May 15 01:11:00.309099 env[1378]: time="2025-05-15T01:11:00.308922227Z" level=info msg="StartContainer for \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\" returns successfully" May 15 01:11:00.734997 kubelet[2379]: E0515 01:11:00.734963 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:11:00.859355 kubelet[2379]: E0515 01:11:00.859339 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.859484 kubelet[2379]: W0515 01:11:00.859471 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.859552 kubelet[2379]: E0515 01:11:00.859540 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.859973 kubelet[2379]: E0515 01:11:00.859964 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.860045 kubelet[2379]: W0515 01:11:00.860034 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.860131 kubelet[2379]: E0515 01:11:00.860098 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.860333 kubelet[2379]: E0515 01:11:00.860325 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.860401 kubelet[2379]: W0515 01:11:00.860390 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.860462 kubelet[2379]: E0515 01:11:00.860452 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:00.860704 kubelet[2379]: E0515 01:11:00.860689 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.860769 kubelet[2379]: W0515 01:11:00.860759 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.860832 kubelet[2379]: E0515 01:11:00.860822 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.861450 kubelet[2379]: E0515 01:11:00.861439 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.861521 kubelet[2379]: W0515 01:11:00.861509 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.861585 kubelet[2379]: E0515 01:11:00.861575 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.861750 kubelet[2379]: E0515 01:11:00.861742 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.861816 kubelet[2379]: W0515 01:11:00.861805 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.861878 kubelet[2379]: E0515 01:11:00.861868 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.862025 kubelet[2379]: E0515 01:11:00.862017 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.862084 kubelet[2379]: W0515 01:11:00.862073 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.862148 kubelet[2379]: E0515 01:11:00.862138 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.862375 kubelet[2379]: E0515 01:11:00.862366 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.862445 kubelet[2379]: W0515 01:11:00.862434 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.862503 kubelet[2379]: E0515 01:11:00.862493 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:00.862658 kubelet[2379]: E0515 01:11:00.862651 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.862717 kubelet[2379]: W0515 01:11:00.862707 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.862781 kubelet[2379]: E0515 01:11:00.862771 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.862942 kubelet[2379]: E0515 01:11:00.862934 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.863004 kubelet[2379]: W0515 01:11:00.862993 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.863064 kubelet[2379]: E0515 01:11:00.863054 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.863209 kubelet[2379]: E0515 01:11:00.863202 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.863292 kubelet[2379]: W0515 01:11:00.863281 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.863354 kubelet[2379]: E0515 01:11:00.863343 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.863500 kubelet[2379]: E0515 01:11:00.863493 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.863561 kubelet[2379]: W0515 01:11:00.863551 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.863619 kubelet[2379]: E0515 01:11:00.863609 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.863769 kubelet[2379]: E0515 01:11:00.863762 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.863828 kubelet[2379]: W0515 01:11:00.863818 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.863891 kubelet[2379]: E0515 01:11:00.863881 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:00.864038 kubelet[2379]: E0515 01:11:00.864031 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.864098 kubelet[2379]: W0515 01:11:00.864088 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.864159 kubelet[2379]: E0515 01:11:00.864149 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.864321 kubelet[2379]: E0515 01:11:00.864312 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.864383 kubelet[2379]: W0515 01:11:00.864372 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.864445 kubelet[2379]: E0515 01:11:00.864435 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.883773 kubelet[2379]: E0515 01:11:00.883759 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.883874 kubelet[2379]: W0515 01:11:00.883863 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.883944 kubelet[2379]: E0515 01:11:00.883936 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.884136 kubelet[2379]: E0515 01:11:00.884130 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.884194 kubelet[2379]: W0515 01:11:00.884186 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.884251 kubelet[2379]: E0515 01:11:00.884244 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.884352 kubelet[2379]: E0515 01:11:00.884340 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.884352 kubelet[2379]: W0515 01:11:00.884350 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.884421 kubelet[2379]: E0515 01:11:00.884360 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:00.884457 kubelet[2379]: E0515 01:11:00.884448 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.884457 kubelet[2379]: W0515 01:11:00.884454 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.884509 kubelet[2379]: E0515 01:11:00.884466 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.884547 kubelet[2379]: E0515 01:11:00.884538 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.884547 kubelet[2379]: W0515 01:11:00.884544 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.884596 kubelet[2379]: E0515 01:11:00.884551 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.884649 kubelet[2379]: E0515 01:11:00.884636 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.884678 kubelet[2379]: W0515 01:11:00.884646 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.884678 kubelet[2379]: E0515 01:11:00.884657 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.884873 kubelet[2379]: E0515 01:11:00.884863 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.884873 kubelet[2379]: W0515 01:11:00.884871 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.884931 kubelet[2379]: E0515 01:11:00.884886 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.884985 kubelet[2379]: E0515 01:11:00.884975 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.884985 kubelet[2379]: W0515 01:11:00.884982 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.885066 kubelet[2379]: E0515 01:11:00.884990 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:00.885127 kubelet[2379]: E0515 01:11:00.885089 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.885127 kubelet[2379]: W0515 01:11:00.885093 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.885127 kubelet[2379]: E0515 01:11:00.885100 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.885184 kubelet[2379]: E0515 01:11:00.885174 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.885184 kubelet[2379]: W0515 01:11:00.885178 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.885220 kubelet[2379]: E0515 01:11:00.885183 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.885277 kubelet[2379]: E0515 01:11:00.885266 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.885277 kubelet[2379]: W0515 01:11:00.885274 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.885338 kubelet[2379]: E0515 01:11:00.885281 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.885368 kubelet[2379]: E0515 01:11:00.885359 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.885368 kubelet[2379]: W0515 01:11:00.885366 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.885416 kubelet[2379]: E0515 01:11:00.885378 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.885540 kubelet[2379]: E0515 01:11:00.885528 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.885540 kubelet[2379]: W0515 01:11:00.885536 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.885615 kubelet[2379]: E0515 01:11:00.885606 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:00.885672 kubelet[2379]: E0515 01:11:00.885620 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.885717 kubelet[2379]: W0515 01:11:00.885710 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.885771 kubelet[2379]: E0515 01:11:00.885764 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.885912 kubelet[2379]: E0515 01:11:00.885906 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.885964 kubelet[2379]: W0515 01:11:00.885957 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.886007 kubelet[2379]: E0515 01:11:00.886000 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.886145 kubelet[2379]: E0515 01:11:00.886139 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.886204 kubelet[2379]: W0515 01:11:00.886182 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.886268 kubelet[2379]: E0515 01:11:00.886260 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.886414 kubelet[2379]: E0515 01:11:00.886408 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.886471 kubelet[2379]: W0515 01:11:00.886464 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.886516 kubelet[2379]: E0515 01:11:00.886509 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:00.886806 kubelet[2379]: E0515 01:11:00.886799 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:00.886859 kubelet[2379]: W0515 01:11:00.886851 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:00.886901 kubelet[2379]: E0515 01:11:00.886894 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:00.956866 kubelet[2379]: I0515 01:11:00.956820 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f4f67cc77-5nb7d" podStartSLOduration=1.941504618 podStartE2EDuration="4.956808708s" podCreationTimestamp="2025-05-15 01:10:56 +0000 UTC" firstStartedPulling="2025-05-15 01:10:57.139035826 +0000 UTC m=+20.610918908" lastFinishedPulling="2025-05-15 01:11:00.154339912 +0000 UTC m=+23.626222998" observedRunningTime="2025-05-15 01:11:00.955478062 +0000 UTC m=+24.427361164" watchObservedRunningTime="2025-05-15 01:11:00.956808708 +0000 UTC m=+24.428691802" May 15 01:11:01.859663 kubelet[2379]: I0515 01:11:01.859310 2379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:11:01.870551 kubelet[2379]: E0515 01:11:01.870366 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.870551 kubelet[2379]: W0515 01:11:01.870380 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.870551 kubelet[2379]: E0515 01:11:01.870394 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.870551 kubelet[2379]: E0515 01:11:01.870501 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.870551 kubelet[2379]: W0515 01:11:01.870506 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.870551 kubelet[2379]: E0515 01:11:01.870511 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.870803 kubelet[2379]: E0515 01:11:01.870748 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.870803 kubelet[2379]: W0515 01:11:01.870753 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.870803 kubelet[2379]: E0515 01:11:01.870760 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.870954 kubelet[2379]: E0515 01:11:01.870902 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.870954 kubelet[2379]: W0515 01:11:01.870907 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.870954 kubelet[2379]: E0515 01:11:01.870912 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:01.886415 kubelet[2379]: E0515 01:11:01.871046 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.886415 kubelet[2379]: W0515 01:11:01.871051 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.886415 kubelet[2379]: E0515 01:11:01.871056 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.886415 kubelet[2379]: E0515 01:11:01.871140 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.886415 kubelet[2379]: W0515 01:11:01.871146 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.886415 kubelet[2379]: E0515 01:11:01.871152 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.886415 kubelet[2379]: E0515 01:11:01.871229 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.886415 kubelet[2379]: W0515 01:11:01.871244 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.886415 kubelet[2379]: E0515 01:11:01.871249 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.886415 kubelet[2379]: E0515 01:11:01.871337 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.886626 kubelet[2379]: W0515 01:11:01.871341 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.886626 kubelet[2379]: E0515 01:11:01.871345 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.886626 kubelet[2379]: E0515 01:11:01.871446 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.886626 kubelet[2379]: W0515 01:11:01.871451 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.886626 kubelet[2379]: E0515 01:11:01.871455 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:01.886626 kubelet[2379]: E0515 01:11:01.871540 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.886626 kubelet[2379]: W0515 01:11:01.871544 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.886626 kubelet[2379]: E0515 01:11:01.871548 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.886626 kubelet[2379]: E0515 01:11:01.871626 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.886626 kubelet[2379]: W0515 01:11:01.871630 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.890909 kubelet[2379]: E0515 01:11:01.871634 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.890909 kubelet[2379]: E0515 01:11:01.871717 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.890909 kubelet[2379]: W0515 01:11:01.871721 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.890909 kubelet[2379]: E0515 01:11:01.871726 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.890909 kubelet[2379]: E0515 01:11:01.871816 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.890909 kubelet[2379]: W0515 01:11:01.871820 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.890909 kubelet[2379]: E0515 01:11:01.871825 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.890909 kubelet[2379]: E0515 01:11:01.871903 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.890909 kubelet[2379]: W0515 01:11:01.871907 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.890909 kubelet[2379]: E0515 01:11:01.871912 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:01.904329 kubelet[2379]: E0515 01:11:01.871993 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904329 kubelet[2379]: W0515 01:11:01.871997 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904329 kubelet[2379]: E0515 01:11:01.872001 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904329 kubelet[2379]: E0515 01:11:01.891319 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904329 kubelet[2379]: W0515 01:11:01.891328 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904329 kubelet[2379]: E0515 01:11:01.891340 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904329 kubelet[2379]: E0515 01:11:01.891447 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904329 kubelet[2379]: W0515 01:11:01.891458 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904329 kubelet[2379]: E0515 01:11:01.891463 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904329 kubelet[2379]: E0515 01:11:01.891548 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904508 kubelet[2379]: W0515 01:11:01.891553 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904508 kubelet[2379]: E0515 01:11:01.891558 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904508 kubelet[2379]: E0515 01:11:01.891662 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904508 kubelet[2379]: W0515 01:11:01.891666 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904508 kubelet[2379]: E0515 01:11:01.891673 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:01.904508 kubelet[2379]: E0515 01:11:01.891759 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904508 kubelet[2379]: W0515 01:11:01.891763 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904508 kubelet[2379]: E0515 01:11:01.891773 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904508 kubelet[2379]: E0515 01:11:01.891854 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904508 kubelet[2379]: W0515 01:11:01.891858 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904682 kubelet[2379]: E0515 01:11:01.891864 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904682 kubelet[2379]: E0515 01:11:01.891959 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904682 kubelet[2379]: W0515 01:11:01.891963 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904682 kubelet[2379]: E0515 01:11:01.891974 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904682 kubelet[2379]: E0515 01:11:01.892135 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904682 kubelet[2379]: W0515 01:11:01.892139 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904682 kubelet[2379]: E0515 01:11:01.892146 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904682 kubelet[2379]: E0515 01:11:01.892244 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904682 kubelet[2379]: W0515 01:11:01.892248 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904682 kubelet[2379]: E0515 01:11:01.892255 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:01.904858 kubelet[2379]: E0515 01:11:01.892339 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904858 kubelet[2379]: W0515 01:11:01.892348 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904858 kubelet[2379]: E0515 01:11:01.892353 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904858 kubelet[2379]: E0515 01:11:01.892432 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904858 kubelet[2379]: W0515 01:11:01.892437 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904858 kubelet[2379]: E0515 01:11:01.892443 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904858 kubelet[2379]: E0515 01:11:01.892528 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.904858 kubelet[2379]: W0515 01:11:01.892532 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.904858 kubelet[2379]: E0515 01:11:01.892537 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.904858 kubelet[2379]: E0515 01:11:01.892841 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.905029 kubelet[2379]: W0515 01:11:01.892846 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.905029 kubelet[2379]: E0515 01:11:01.892882 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.905029 kubelet[2379]: E0515 01:11:01.892924 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.905029 kubelet[2379]: W0515 01:11:01.892928 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.905029 kubelet[2379]: E0515 01:11:01.892933 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:01.905029 kubelet[2379]: E0515 01:11:01.893012 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.905029 kubelet[2379]: W0515 01:11:01.893016 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.905029 kubelet[2379]: E0515 01:11:01.893021 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.905029 kubelet[2379]: E0515 01:11:01.893116 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.905029 kubelet[2379]: W0515 01:11:01.893120 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.905200 kubelet[2379]: E0515 01:11:01.893125 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.905200 kubelet[2379]: E0515 01:11:01.893218 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.905200 kubelet[2379]: W0515 01:11:01.893222 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.905200 kubelet[2379]: E0515 01:11:01.893227 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 01:11:01.905200 kubelet[2379]: E0515 01:11:01.893417 2379 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 01:11:01.905200 kubelet[2379]: W0515 01:11:01.893422 2379 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 01:11:01.905200 kubelet[2379]: E0515 01:11:01.893427 2379 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 01:11:01.922375 env[1378]: time="2025-05-15T01:11:01.922346648Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:01.928524 env[1378]: time="2025-05-15T01:11:01.928493718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:01.931988 env[1378]: time="2025-05-15T01:11:01.931923945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:01.933106 env[1378]: time="2025-05-15T01:11:01.933089065Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:01.933415 env[1378]: time="2025-05-15T01:11:01.933396871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 01:11:01.934750 env[1378]: time="2025-05-15T01:11:01.934710959Z" level=info msg="CreateContainer within sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 01:11:01.964169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458664791.mount: Deactivated successfully. May 15 01:11:01.980783 env[1378]: time="2025-05-15T01:11:01.980749206Z" level=info msg="CreateContainer within sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\"" May 15 01:11:01.981142 env[1378]: time="2025-05-15T01:11:01.981124032Z" level=info msg="StartContainer for \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\"" May 15 01:11:02.030761 env[1378]: time="2025-05-15T01:11:02.030732327Z" level=info msg="StartContainer for \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\" returns successfully" May 15 01:11:02.159276 systemd[1]: run-containerd-runc-k8s.io-27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646-runc.ciOf2I.mount: Deactivated successfully. May 15 01:11:02.159381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646-rootfs.mount: Deactivated successfully. 
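The burst of kubelet messages above is the FlexVolume dynamic-probe loop: for each vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ (here nodeagent~uds), the kubelet executes the driver binary with the single argument init and expects a small JSON status object on stdout. The uds executable is not installed yet, so the call fails, the captured output is empty, and decoding an empty string is what produces "unexpected end of JSON input". The flexvol-driver container created just above from the pod2daemon-flexvol image is what normally installs that binary, which is why these probe errors stop once it has run. A minimal sketch of the probe step, assuming a simplified DriverStatus shape and an illustrative probeDriver helper rather than the kubelet's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a simplified stand-in for the JSON object a FlexVolume
// driver is expected to print for the "init" call (assumption: field names
// follow the documented FlexVolume protocol, not kubelet's internal type).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver mimics the kubelet's driver call: run "<driver> init" and
// parse whatever the driver printed on stdout/stderr as JSON.
func probeDriver(path string) (*DriverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// The binary does not exist yet, so the exec itself fails.
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty output yields exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output: %w", err)
	}
	return &st, nil
}

func main() {
	st, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(st, err)
}
```

With the binary missing, the helper returns the exec error; pointed at a driver that prints nothing, it returns the same unmarshal error logged here.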
May 15 01:11:02.222059 env[1378]: time="2025-05-15T01:11:02.222019037Z" level=info msg="shim disconnected" id=27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646 May 15 01:11:02.222059 env[1378]: time="2025-05-15T01:11:02.222057057Z" level=warning msg="cleaning up after shim disconnected" id=27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646 namespace=k8s.io May 15 01:11:02.222227 env[1378]: time="2025-05-15T01:11:02.222065966Z" level=info msg="cleaning up dead shim" May 15 01:11:02.226881 env[1378]: time="2025-05-15T01:11:02.226856420Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3009 runtime=io.containerd.runc.v2\n" May 15 01:11:02.736564 kubelet[2379]: E0515 01:11:02.735540 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:11:02.862533 env[1378]: time="2025-05-15T01:11:02.862501704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 01:11:04.737112 kubelet[2379]: E0515 01:11:04.736786 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:11:06.578196 kubelet[2379]: I0515 01:11:06.577739 2379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:11:06.615574 kernel: audit: type=1325 audit(1747271466.608:291): table=filter:95 family=2 entries=17 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:06.615649 kernel: audit: type=1300 audit(1747271466.608:291): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffc161b750 a2=0 a3=7fffc161b73c items=0 ppid=2516 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:06.608000 audit[3031]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:06.608000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffc161b750 a2=0 a3=7fffc161b73c items=0 ppid=2516 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:06.620408 kernel: audit: type=1327 audit(1747271466.608:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:06.620458 kernel: audit: type=1325 audit(1747271466.615:292): table=nat:96 family=2 entries=19 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:06.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:06.615000 audit[3031]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain 
pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:06.624430 kernel: audit: type=1300 audit(1747271466.615:292): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffc161b750 a2=0 a3=7fffc161b73c items=0 ppid=2516 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:06.615000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffc161b750 a2=0 a3=7fffc161b73c items=0 ppid=2516 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:06.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:06.629260 kernel: audit: type=1327 audit(1747271466.615:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:06.736138 kubelet[2379]: E0515 01:11:06.736112 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:11:07.849786 env[1378]: time="2025-05-15T01:11:07.849761269Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:07.863880 env[1378]: time="2025-05-15T01:11:07.863859799Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:07.869590 env[1378]: time="2025-05-15T01:11:07.869574418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:07.876059 env[1378]: time="2025-05-15T01:11:07.876041151Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:07.876358 env[1378]: time="2025-05-15T01:11:07.876339512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 01:11:07.878533 env[1378]: time="2025-05-15T01:11:07.878517317Z" level=info msg="CreateContainer within sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 01:11:07.918262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2132057332.mount: Deactivated successfully. 
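The audit records interleaved with the kernel messages above capture a process (ppid 2516, most likely kube-proxy) reprogramming netfilter through iptables-restore: type 1325 is a NETFILTER_CFG record, 1300 the matching SYSCALL record, and 1327 the PROCTITLE record, in which the full command line is hex-encoded with NUL bytes separating the arguments. Decoding the PROCTITLE value copied from the log yields iptables-restore -w 5 -W 100000 --noflush --counters. A small, purely illustrative decoder:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied verbatim from the audit record above.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are separated by NUL bytes in the audit PROCTITLE field.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: iptables-restore -w 5 -W 100000 --noflush --counters
}
```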
May 15 01:11:08.020531 env[1378]: time="2025-05-15T01:11:08.020492148Z" level=info msg="CreateContainer within sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\"" May 15 01:11:08.020997 env[1378]: time="2025-05-15T01:11:08.020983820Z" level=info msg="StartContainer for \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\"" May 15 01:11:08.072843 env[1378]: time="2025-05-15T01:11:08.072819731Z" level=info msg="StartContainer for \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\" returns successfully" May 15 01:11:08.735530 kubelet[2379]: E0515 01:11:08.735493 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:11:09.215400 env[1378]: time="2025-05-15T01:11:09.215364476Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 01:11:09.227666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d-rootfs.mount: Deactivated successfully. May 15 01:11:09.232824 env[1378]: time="2025-05-15T01:11:09.232794643Z" level=info msg="shim disconnected" id=b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d May 15 01:11:09.232954 env[1378]: time="2025-05-15T01:11:09.232943094Z" level=warning msg="cleaning up after shim disconnected" id=b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d namespace=k8s.io May 15 01:11:09.233009 env[1378]: time="2025-05-15T01:11:09.233000349Z" level=info msg="cleaning up dead shim" May 15 01:11:09.238040 env[1378]: time="2025-05-15T01:11:09.238021967Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3080 runtime=io.containerd.runc.v2\n" May 15 01:11:09.271299 kubelet[2379]: I0515 01:11:09.270285 2379 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 01:11:09.291578 kubelet[2379]: I0515 01:11:09.289984 2379 topology_manager.go:215] "Topology Admit Handler" podUID="e99f5774-1995-4a8c-89ab-22334a39be19" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5wxnk" May 15 01:11:09.302287 kubelet[2379]: I0515 01:11:09.302253 2379 topology_manager.go:215] "Topology Admit Handler" podUID="a465e0d0-b1f5-4d68-a5ea-fce28821f59f" podNamespace="calico-apiserver" podName="calico-apiserver-68cd77bbfb-srwdw" May 15 01:11:09.302394 kubelet[2379]: I0515 01:11:09.302366 2379 topology_manager.go:215] "Topology Admit Handler" podUID="3b6acc2c-032c-40b8-85fc-0967a7269b9b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-frhnb" May 15 01:11:09.302439 kubelet[2379]: I0515 01:11:09.302425 2379 topology_manager.go:215] "Topology Admit Handler" podUID="ab75af42-a771-4175-8a6f-81471c06a1c4" podNamespace="calico-apiserver" podName="calico-apiserver-68cd77bbfb-stq89" May 15 01:11:09.302492 kubelet[2379]: I0515 01:11:09.302480 2379 topology_manager.go:215] "Topology Admit Handler" 
podUID="285861ef-806b-4ef3-881e-bc2ab411454c" podNamespace="calico-system" podName="calico-kube-controllers-77f75b95c-2fjvp" May 15 01:11:09.302552 kubelet[2379]: I0515 01:11:09.302541 2379 topology_manager.go:215] "Topology Admit Handler" podUID="9ec7ac22-58ee-4832-ade2-3b509e93036d" podNamespace="calico-apiserver" podName="calico-apiserver-554b8879-bsbbc" May 15 01:11:09.445213 kubelet[2379]: I0515 01:11:09.445189 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5ng\" (UniqueName: \"kubernetes.io/projected/3b6acc2c-032c-40b8-85fc-0967a7269b9b-kube-api-access-7r5ng\") pod \"coredns-7db6d8ff4d-frhnb\" (UID: \"3b6acc2c-032c-40b8-85fc-0967a7269b9b\") " pod="kube-system/coredns-7db6d8ff4d-frhnb" May 15 01:11:09.445399 kubelet[2379]: I0515 01:11:09.445386 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8qg8\" (UniqueName: \"kubernetes.io/projected/ab75af42-a771-4175-8a6f-81471c06a1c4-kube-api-access-m8qg8\") pod \"calico-apiserver-68cd77bbfb-stq89\" (UID: \"ab75af42-a771-4175-8a6f-81471c06a1c4\") " pod="calico-apiserver/calico-apiserver-68cd77bbfb-stq89" May 15 01:11:09.445476 kubelet[2379]: I0515 01:11:09.445466 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e99f5774-1995-4a8c-89ab-22334a39be19-config-volume\") pod \"coredns-7db6d8ff4d-5wxnk\" (UID: \"e99f5774-1995-4a8c-89ab-22334a39be19\") " pod="kube-system/coredns-7db6d8ff4d-5wxnk" May 15 01:11:09.445561 kubelet[2379]: I0515 01:11:09.445552 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a465e0d0-b1f5-4d68-a5ea-fce28821f59f-calico-apiserver-certs\") pod \"calico-apiserver-68cd77bbfb-srwdw\" (UID: \"a465e0d0-b1f5-4d68-a5ea-fce28821f59f\") " pod="calico-apiserver/calico-apiserver-68cd77bbfb-srwdw" May 15 01:11:09.445653 kubelet[2379]: I0515 01:11:09.445644 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4xz2\" (UniqueName: \"kubernetes.io/projected/285861ef-806b-4ef3-881e-bc2ab411454c-kube-api-access-r4xz2\") pod \"calico-kube-controllers-77f75b95c-2fjvp\" (UID: \"285861ef-806b-4ef3-881e-bc2ab411454c\") " pod="calico-system/calico-kube-controllers-77f75b95c-2fjvp" May 15 01:11:09.445733 kubelet[2379]: I0515 01:11:09.445710 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pfj2\" (UniqueName: \"kubernetes.io/projected/a465e0d0-b1f5-4d68-a5ea-fce28821f59f-kube-api-access-8pfj2\") pod \"calico-apiserver-68cd77bbfb-srwdw\" (UID: \"a465e0d0-b1f5-4d68-a5ea-fce28821f59f\") " pod="calico-apiserver/calico-apiserver-68cd77bbfb-srwdw" May 15 01:11:09.445796 kubelet[2379]: I0515 01:11:09.445788 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab75af42-a771-4175-8a6f-81471c06a1c4-calico-apiserver-certs\") pod \"calico-apiserver-68cd77bbfb-stq89\" (UID: \"ab75af42-a771-4175-8a6f-81471c06a1c4\") " pod="calico-apiserver/calico-apiserver-68cd77bbfb-stq89" May 15 01:11:09.445857 kubelet[2379]: I0515 01:11:09.445850 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnnc\" 
(UniqueName: \"kubernetes.io/projected/e99f5774-1995-4a8c-89ab-22334a39be19-kube-api-access-ffnnc\") pod \"coredns-7db6d8ff4d-5wxnk\" (UID: \"e99f5774-1995-4a8c-89ab-22334a39be19\") " pod="kube-system/coredns-7db6d8ff4d-5wxnk" May 15 01:11:09.446500 kubelet[2379]: I0515 01:11:09.445967 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/285861ef-806b-4ef3-881e-bc2ab411454c-tigera-ca-bundle\") pod \"calico-kube-controllers-77f75b95c-2fjvp\" (UID: \"285861ef-806b-4ef3-881e-bc2ab411454c\") " pod="calico-system/calico-kube-controllers-77f75b95c-2fjvp" May 15 01:11:09.446500 kubelet[2379]: I0515 01:11:09.445981 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b6acc2c-032c-40b8-85fc-0967a7269b9b-config-volume\") pod \"coredns-7db6d8ff4d-frhnb\" (UID: \"3b6acc2c-032c-40b8-85fc-0967a7269b9b\") " pod="kube-system/coredns-7db6d8ff4d-frhnb" May 15 01:11:09.446500 kubelet[2379]: I0515 01:11:09.445991 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtf6c\" (UniqueName: \"kubernetes.io/projected/9ec7ac22-58ee-4832-ade2-3b509e93036d-kube-api-access-dtf6c\") pod \"calico-apiserver-554b8879-bsbbc\" (UID: \"9ec7ac22-58ee-4832-ade2-3b509e93036d\") " pod="calico-apiserver/calico-apiserver-554b8879-bsbbc" May 15 01:11:09.446500 kubelet[2379]: I0515 01:11:09.446002 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9ec7ac22-58ee-4832-ade2-3b509e93036d-calico-apiserver-certs\") pod \"calico-apiserver-554b8879-bsbbc\" (UID: \"9ec7ac22-58ee-4832-ade2-3b509e93036d\") " pod="calico-apiserver/calico-apiserver-554b8879-bsbbc" May 15 01:11:09.620281 env[1378]: time="2025-05-15T01:11:09.619488655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b8879-bsbbc,Uid:9ec7ac22-58ee-4832-ade2-3b509e93036d,Namespace:calico-apiserver,Attempt:0,}" May 15 01:11:09.626786 env[1378]: time="2025-05-15T01:11:09.626767750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd77bbfb-srwdw,Uid:a465e0d0-b1f5-4d68-a5ea-fce28821f59f,Namespace:calico-apiserver,Attempt:0,}" May 15 01:11:09.630008 env[1378]: time="2025-05-15T01:11:09.629993493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f75b95c-2fjvp,Uid:285861ef-806b-4ef3-881e-bc2ab411454c,Namespace:calico-system,Attempt:0,}" May 15 01:11:09.630206 env[1378]: time="2025-05-15T01:11:09.630193515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd77bbfb-stq89,Uid:ab75af42-a771-4175-8a6f-81471c06a1c4,Namespace:calico-apiserver,Attempt:0,}" May 15 01:11:09.633681 env[1378]: time="2025-05-15T01:11:09.633666677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5wxnk,Uid:e99f5774-1995-4a8c-89ab-22334a39be19,Namespace:kube-system,Attempt:0,}" May 15 01:11:09.633896 env[1378]: time="2025-05-15T01:11:09.633883750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frhnb,Uid:3b6acc2c-032c-40b8-85fc-0967a7269b9b,Namespace:kube-system,Attempt:0,}" May 15 01:11:09.887847 env[1378]: time="2025-05-15T01:11:09.887630765Z" level=error msg="Failed to destroy network for sandbox 
\"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.888220 env[1378]: time="2025-05-15T01:11:09.888201791Z" level=error msg="encountered an error cleaning up failed sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.888310 env[1378]: time="2025-05-15T01:11:09.888293898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b8879-bsbbc,Uid:9ec7ac22-58ee-4832-ade2-3b509e93036d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.890295 env[1378]: time="2025-05-15T01:11:09.890261135Z" level=error msg="Failed to destroy network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.891278 env[1378]: time="2025-05-15T01:11:09.890481759Z" level=error msg="encountered an error cleaning up failed sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.891278 env[1378]: time="2025-05-15T01:11:09.890520276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frhnb,Uid:3b6acc2c-032c-40b8-85fc-0967a7269b9b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.893443 kubelet[2379]: E0515 01:11:09.890634 2379 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.893759 kubelet[2379]: E0515 01:11:09.891478 2379 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.893933 env[1378]: 
time="2025-05-15T01:11:09.893890874Z" level=error msg="Failed to destroy network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.894928 env[1378]: time="2025-05-15T01:11:09.894907327Z" level=error msg="encountered an error cleaning up failed sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.895321 kubelet[2379]: E0515 01:11:09.895229 2379 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-554b8879-bsbbc" May 15 01:11:09.895321 kubelet[2379]: E0515 01:11:09.895227 2379 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frhnb" May 15 01:11:09.895321 kubelet[2379]: E0515 01:11:09.895259 2379 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-554b8879-bsbbc" May 15 01:11:09.895321 kubelet[2379]: E0515 01:11:09.895268 2379 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frhnb" May 15 01:11:09.895435 kubelet[2379]: E0515 01:11:09.895289 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-554b8879-bsbbc_calico-apiserver(9ec7ac22-58ee-4832-ade2-3b509e93036d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-554b8879-bsbbc_calico-apiserver(9ec7ac22-58ee-4832-ade2-3b509e93036d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-554b8879-bsbbc" podUID="9ec7ac22-58ee-4832-ade2-3b509e93036d" May 15 01:11:09.895435 kubelet[2379]: E0515 01:11:09.895289 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-frhnb_kube-system(3b6acc2c-032c-40b8-85fc-0967a7269b9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-frhnb_kube-system(3b6acc2c-032c-40b8-85fc-0967a7269b9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-frhnb" podUID="3b6acc2c-032c-40b8-85fc-0967a7269b9b" May 15 01:11:09.897134 kubelet[2379]: E0515 01:11:09.896519 2379 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.897134 kubelet[2379]: E0515 01:11:09.896560 2379 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5wxnk" May 15 01:11:09.897134 kubelet[2379]: E0515 01:11:09.896573 2379 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5wxnk" May 15 01:11:09.898214 env[1378]: time="2025-05-15T01:11:09.896281451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5wxnk,Uid:e99f5774-1995-4a8c-89ab-22334a39be19,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.898357 kubelet[2379]: E0515 01:11:09.896595 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5wxnk_kube-system(e99f5774-1995-4a8c-89ab-22334a39be19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5wxnk_kube-system(e99f5774-1995-4a8c-89ab-22334a39be19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5wxnk" podUID="e99f5774-1995-4a8c-89ab-22334a39be19" May 15 01:11:09.901042 env[1378]: time="2025-05-15T01:11:09.901009291Z" level=error msg="Failed to destroy network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.901395 env[1378]: time="2025-05-15T01:11:09.901375243Z" level=error msg="encountered an error cleaning up failed sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.901482 env[1378]: time="2025-05-15T01:11:09.901463975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd77bbfb-stq89,Uid:ab75af42-a771-4175-8a6f-81471c06a1c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.901684 kubelet[2379]: E0515 01:11:09.901656 2379 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.901728 kubelet[2379]: E0515 01:11:09.901690 2379 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd77bbfb-stq89" May 15 01:11:09.901728 kubelet[2379]: E0515 01:11:09.901705 2379 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd77bbfb-stq89" May 15 01:11:09.901775 kubelet[2379]: E0515 01:11:09.901733 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68cd77bbfb-stq89_calico-apiserver(ab75af42-a771-4175-8a6f-81471c06a1c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68cd77bbfb-stq89_calico-apiserver(ab75af42-a771-4175-8a6f-81471c06a1c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cd77bbfb-stq89" podUID="ab75af42-a771-4175-8a6f-81471c06a1c4" May 15 01:11:09.907156 env[1378]: time="2025-05-15T01:11:09.907121577Z" level=error msg="Failed to destroy network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.907499 env[1378]: time="2025-05-15T01:11:09.907480824Z" level=error msg="encountered an error cleaning up failed sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.907577 env[1378]: time="2025-05-15T01:11:09.907559298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd77bbfb-srwdw,Uid:a465e0d0-b1f5-4d68-a5ea-fce28821f59f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.907819 kubelet[2379]: E0515 01:11:09.907789 2379 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.908399 kubelet[2379]: E0515 01:11:09.907831 2379 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd77bbfb-srwdw" May 15 01:11:09.908399 kubelet[2379]: E0515 01:11:09.907844 2379 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cd77bbfb-srwdw" May 15 01:11:09.908399 kubelet[2379]: E0515 01:11:09.907871 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68cd77bbfb-srwdw_calico-apiserver(a465e0d0-b1f5-4d68-a5ea-fce28821f59f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68cd77bbfb-srwdw_calico-apiserver(a465e0d0-b1f5-4d68-a5ea-fce28821f59f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cd77bbfb-srwdw" podUID="a465e0d0-b1f5-4d68-a5ea-fce28821f59f" May 15 01:11:09.916944 kubelet[2379]: I0515 01:11:09.915454 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:09.916944 kubelet[2379]: I0515 01:11:09.916259 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:09.917180 env[1378]: time="2025-05-15T01:11:09.917161259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 01:11:09.923662 env[1378]: time="2025-05-15T01:11:09.923610792Z" level=error msg="Failed to destroy network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.925162 env[1378]: time="2025-05-15T01:11:09.924121882Z" level=error msg="encountered an error cleaning up failed sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.925162 env[1378]: time="2025-05-15T01:11:09.924149450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f75b95c-2fjvp,Uid:285861ef-806b-4ef3-881e-bc2ab411454c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.926791 kubelet[2379]: E0515 01:11:09.926478 2379 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.926791 kubelet[2379]: E0515 01:11:09.926511 2379 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f75b95c-2fjvp" May 15 01:11:09.926791 kubelet[2379]: E0515 01:11:09.926525 2379 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f75b95c-2fjvp" May 15 01:11:09.928567 kubelet[2379]: E0515 01:11:09.926549 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77f75b95c-2fjvp_calico-system(285861ef-806b-4ef3-881e-bc2ab411454c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77f75b95c-2fjvp_calico-system(285861ef-806b-4ef3-881e-bc2ab411454c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77f75b95c-2fjvp" podUID="285861ef-806b-4ef3-881e-bc2ab411454c" May 15 01:11:09.929501 kubelet[2379]: I0515 01:11:09.929303 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:09.930628 kubelet[2379]: I0515 01:11:09.930270 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:09.930842 kubelet[2379]: I0515 01:11:09.930825 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:09.936658 env[1378]: time="2025-05-15T01:11:09.936627793Z" level=info msg="StopPodSandbox for \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\"" May 15 01:11:09.936915 env[1378]: time="2025-05-15T01:11:09.936899417Z" level=info msg="StopPodSandbox for \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\"" May 15 01:11:09.937083 env[1378]: time="2025-05-15T01:11:09.937066277Z" level=info msg="StopPodSandbox for \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\"" May 15 01:11:09.937459 env[1378]: time="2025-05-15T01:11:09.937430908Z" level=info msg="StopPodSandbox for \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\"" May 15 01:11:09.937669 env[1378]: time="2025-05-15T01:11:09.937657265Z" level=info msg="StopPodSandbox for \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\"" May 15 01:11:09.979477 env[1378]: time="2025-05-15T01:11:09.979426229Z" level=error msg="StopPodSandbox for \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\" failed" error="failed to destroy network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.979989 kubelet[2379]: E0515 01:11:09.979820 2379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:09.979989 kubelet[2379]: E0515 01:11:09.979878 2379 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63"} May 15 01:11:09.979989 kubelet[2379]: E0515 01:11:09.979941 2379 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e99f5774-1995-4a8c-89ab-22334a39be19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 01:11:09.979989 kubelet[2379]: E0515 01:11:09.979961 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e99f5774-1995-4a8c-89ab-22334a39be19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5wxnk" podUID="e99f5774-1995-4a8c-89ab-22334a39be19" May 15 01:11:09.991810 env[1378]: time="2025-05-15T01:11:09.991769234Z" level=error msg="StopPodSandbox for \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\" failed" error="failed to destroy network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:09.992053 kubelet[2379]: E0515 01:11:09.991992 2379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:09.992053 kubelet[2379]: E0515 01:11:09.992046 2379 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83"} May 15 01:11:09.992142 kubelet[2379]: E0515 01:11:09.992070 2379 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a465e0d0-b1f5-4d68-a5ea-fce28821f59f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 01:11:09.992142 kubelet[2379]: E0515 01:11:09.992095 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a465e0d0-b1f5-4d68-a5ea-fce28821f59f\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cd77bbfb-srwdw" podUID="a465e0d0-b1f5-4d68-a5ea-fce28821f59f" May 15 01:11:10.005331 env[1378]: time="2025-05-15T01:11:10.005285489Z" level=error msg="StopPodSandbox for \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\" failed" error="failed to destroy network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.005614 kubelet[2379]: E0515 01:11:10.005591 2379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:10.005708 kubelet[2379]: E0515 01:11:10.005623 2379 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2"} May 15 01:11:10.005708 kubelet[2379]: E0515 01:11:10.005653 2379 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b6acc2c-032c-40b8-85fc-0967a7269b9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 01:11:10.005708 kubelet[2379]: E0515 01:11:10.005681 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b6acc2c-032c-40b8-85fc-0967a7269b9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-frhnb" podUID="3b6acc2c-032c-40b8-85fc-0967a7269b9b" May 15 01:11:10.006340 env[1378]: time="2025-05-15T01:11:10.006317062Z" level=error msg="StopPodSandbox for \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\" failed" error="failed to destroy network for sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.006463 kubelet[2379]: E0515 01:11:10.006436 2379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:10.006496 kubelet[2379]: E0515 01:11:10.006466 2379 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28"} May 15 01:11:10.006496 kubelet[2379]: E0515 01:11:10.006483 2379 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ec7ac22-58ee-4832-ade2-3b509e93036d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 01:11:10.006559 kubelet[2379]: E0515 01:11:10.006497 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ec7ac22-58ee-4832-ade2-3b509e93036d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-554b8879-bsbbc" podUID="9ec7ac22-58ee-4832-ade2-3b509e93036d" May 15 01:11:10.007070 env[1378]: time="2025-05-15T01:11:10.007044266Z" level=error msg="StopPodSandbox for \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\" failed" error="failed to destroy network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.007220 kubelet[2379]: E0515 01:11:10.007160 2379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:10.007220 kubelet[2379]: E0515 01:11:10.007180 2379 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb"} May 15 01:11:10.007220 kubelet[2379]: E0515 01:11:10.007195 2379 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab75af42-a771-4175-8a6f-81471c06a1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" May 15 01:11:10.007220 kubelet[2379]: E0515 01:11:10.007206 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab75af42-a771-4175-8a6f-81471c06a1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cd77bbfb-stq89" podUID="ab75af42-a771-4175-8a6f-81471c06a1c4" May 15 01:11:10.738454 env[1378]: time="2025-05-15T01:11:10.738317674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxhjw,Uid:9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5,Namespace:calico-system,Attempt:0,}" May 15 01:11:10.779564 env[1378]: time="2025-05-15T01:11:10.779533369Z" level=error msg="Failed to destroy network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.781225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179-shm.mount: Deactivated successfully. May 15 01:11:10.782274 env[1378]: time="2025-05-15T01:11:10.782252305Z" level=error msg="encountered an error cleaning up failed sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.782387 env[1378]: time="2025-05-15T01:11:10.782371869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxhjw,Uid:9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.782571 kubelet[2379]: E0515 01:11:10.782542 2379 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.782612 kubelet[2379]: E0515 01:11:10.782583 2379 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxhjw" May 15 01:11:10.782612 kubelet[2379]: E0515 01:11:10.782604 2379 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxhjw" May 15 01:11:10.782711 kubelet[2379]: E0515 01:11:10.782633 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxhjw_calico-system(9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxhjw_calico-system(9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:11:10.936651 kubelet[2379]: I0515 01:11:10.936629 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:10.937378 env[1378]: time="2025-05-15T01:11:10.937356611Z" level=info msg="StopPodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\"" May 15 01:11:10.938656 kubelet[2379]: I0515 01:11:10.938639 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:10.939126 env[1378]: time="2025-05-15T01:11:10.939107994Z" level=info msg="StopPodSandbox for \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\"" May 15 01:11:10.960095 env[1378]: time="2025-05-15T01:11:10.960054337Z" level=error msg="StopPodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" failed" error="failed to destroy network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.960387 kubelet[2379]: E0515 01:11:10.960362 2379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:10.960438 kubelet[2379]: E0515 01:11:10.960395 2379 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a"} May 15 01:11:10.960438 kubelet[2379]: E0515 01:11:10.960416 2379 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"285861ef-806b-4ef3-881e-bc2ab411454c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 01:11:10.960533 kubelet[2379]: E0515 01:11:10.960442 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"285861ef-806b-4ef3-881e-bc2ab411454c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77f75b95c-2fjvp" podUID="285861ef-806b-4ef3-881e-bc2ab411454c" May 15 01:11:10.962994 env[1378]: time="2025-05-15T01:11:10.962964240Z" level=error msg="StopPodSandbox for \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\" failed" error="failed to destroy network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 01:11:10.963090 kubelet[2379]: E0515 01:11:10.963070 2379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:10.963128 kubelet[2379]: E0515 01:11:10.963094 2379 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179"} May 15 01:11:10.963128 kubelet[2379]: E0515 01:11:10.963120 2379 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 01:11:10.963195 kubelet[2379]: E0515 01:11:10.963131 2379 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxhjw" podUID="9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5" May 15 01:11:17.159757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790290778.mount: Deactivated successfully. 
May 15 01:11:17.198762 env[1378]: time="2025-05-15T01:11:17.198716583Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:17.200294 env[1378]: time="2025-05-15T01:11:17.200280052Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:17.200977 env[1378]: time="2025-05-15T01:11:17.200956561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:17.202759 env[1378]: time="2025-05-15T01:11:17.202554143Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:17.202790 env[1378]: time="2025-05-15T01:11:17.202761529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 01:11:17.221900 env[1378]: time="2025-05-15T01:11:17.221868155Z" level=info msg="CreateContainer within sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 01:11:17.231032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085388029.mount: Deactivated successfully. May 15 01:11:17.235709 env[1378]: time="2025-05-15T01:11:17.235681589Z" level=info msg="CreateContainer within sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\"" May 15 01:11:17.236381 env[1378]: time="2025-05-15T01:11:17.236244086Z" level=info msg="StartContainer for \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\"" May 15 01:11:17.276330 env[1378]: time="2025-05-15T01:11:17.276307139Z" level=info msg="StartContainer for \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\" returns successfully" May 15 01:11:17.977956 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 01:11:17.978724 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
May 15 01:11:18.989541 kubelet[2379]: I0515 01:11:18.989479 2379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:11:19.307145 kernel: audit: type=1400 audit(1747271479.299:293): avc: denied { write } for pid=3587 comm="tee" name="fd" dev="proc" ino=35511 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.308145 kernel: audit: type=1300 audit(1747271479.299:293): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe52ccea22 a2=241 a3=1b6 items=1 ppid=3542 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.299000 audit[3587]: AVC avc: denied { write } for pid=3587 comm="tee" name="fd" dev="proc" ino=35511 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.337675 kernel: audit: type=1307 audit(1747271479.299:293): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 15 01:11:19.337739 kernel: audit: type=1302 audit(1747271479.299:293): item=0 name="/dev/fd/63" inode=35497 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.337768 kernel: audit: type=1327 audit(1747271479.299:293): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.338603 kernel: audit: type=1400 audit(1747271479.300:294): avc: denied { write } for pid=3596 comm="tee" name="fd" dev="proc" ino=35515 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.338633 kernel: audit: type=1300 audit(1747271479.300:294): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc1e990a34 a2=241 a3=1b6 items=1 ppid=3538 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.338654 kernel: audit: type=1307 audit(1747271479.300:294): cwd="/etc/service/enabled/cni/log" May 15 01:11:19.338678 kernel: audit: type=1302 audit(1747271479.300:294): item=0 name="/dev/fd/63" inode=36512 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.339682 kernel: audit: type=1327 audit(1747271479.300:294): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.299000 audit[3587]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe52ccea22 a2=241 a3=1b6 items=1 ppid=3542 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.299000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 15 01:11:19.299000 audit: PATH item=0 name="/dev/fd/63" inode=35497 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.299000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.300000 audit[3596]: AVC avc: denied { write } for pid=3596 comm="tee" name="fd" dev="proc" ino=35515 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.300000 audit[3596]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc1e990a34 a2=241 a3=1b6 items=1 ppid=3538 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.300000 audit: CWD cwd="/etc/service/enabled/cni/log" May 15 01:11:19.300000 audit: PATH item=0 name="/dev/fd/63" inode=36512 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.300000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.301000 audit[3598]: AVC avc: denied { write } for pid=3598 comm="tee" name="fd" dev="proc" ino=35519 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.301000 audit[3598]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe40a89a32 a2=241 a3=1b6 items=1 ppid=3539 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.301000 audit: CWD cwd="/etc/service/enabled/felix/log" May 15 01:11:19.301000 audit: PATH item=0 name="/dev/fd/63" inode=35504 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.301000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.302000 audit[3585]: AVC avc: denied { write } for pid=3585 comm="tee" name="fd" dev="proc" ino=35523 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.302000 audit[3585]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff5648da32 a2=241 a3=1b6 items=1 ppid=3548 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.302000 audit: CWD cwd="/etc/service/enabled/confd/log" May 15 01:11:19.302000 audit: PATH item=0 name="/dev/fd/63" inode=36505 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.302000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.317000 audit[3609]: AVC avc: denied { write } for pid=3609 comm="tee" name="fd" dev="proc" ino=35542 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.317000 audit[3609]: SYSCALL arch=c000003e 
syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff7687a32 a2=241 a3=1b6 items=1 ppid=3550 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.317000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 15 01:11:19.317000 audit: PATH item=0 name="/dev/fd/63" inode=35529 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.317000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.324000 audit[3594]: AVC avc: denied { write } for pid=3594 comm="tee" name="fd" dev="proc" ino=36521 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.324000 audit[3594]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc30c5ca33 a2=241 a3=1b6 items=1 ppid=3544 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.324000 audit: CWD cwd="/etc/service/enabled/bird/log" May 15 01:11:19.324000 audit: PATH item=0 name="/dev/fd/63" inode=36511 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.327000 audit[3615]: AVC avc: denied { write } for pid=3615 comm="tee" name="fd" dev="proc" ino=36525 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:19.327000 audit[3615]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb3dd5a23 a2=241 a3=1b6 items=1 ppid=3558 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.327000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 15 01:11:19.327000 audit: PATH item=0 name="/dev/fd/63" inode=35539 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:19.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:19.451000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit[3634]: AVC avc: denied 
{ perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.451000 audit: BPF prog-id=10 op=LOAD May 15 01:11:19.451000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb100c830 a2=98 a3=3 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.451000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.451000 audit: BPF prog-id=10 op=UNLOAD May 15 01:11:19.463000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { bpf } for 
pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit: BPF prog-id=11 op=LOAD May 15 01:11:19.463000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb100c610 a2=74 a3=540051 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.463000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.463000 audit: BPF prog-id=11 op=UNLOAD May 15 01:11:19.463000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.463000 audit: BPF prog-id=12 op=LOAD May 15 01:11:19.463000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb100c640 a2=94 a3=2 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.463000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.464000 audit: BPF prog-id=12 op=UNLOAD May 15 01:11:19.562000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit: BPF prog-id=13 op=LOAD May 15 01:11:19.562000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb100c500 a2=40 a3=1 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.562000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.562000 audit: BPF prog-id=13 op=UNLOAD May 15 01:11:19.562000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.562000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcb100c5d0 a2=50 a3=7ffcb100c6b0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.562000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb100c510 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb100c540 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb100c450 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb100c560 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb100c540 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb100c530 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb100c560 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb100c540 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb100c560 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb100c530 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb100c5a0 a2=28 a3=0 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcb100c350 a2=50 a3=1 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 
audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit: BPF prog-id=14 op=LOAD May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb100c350 a2=94 a3=5 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit: BPF prog-id=14 op=UNLOAD May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcb100c400 a2=50 a3=1 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcb100c520 a2=4 a3=38 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.569000 audit[3634]: AVC 
avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.569000 audit[3634]: AVC avc: denied { confidentiality } for pid=3634 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:19.569000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb100c570 a2=94 a3=6 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.570000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { confidentiality } for pid=3634 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:19.570000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb100bd20 a2=94 a3=83 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.570000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.570000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb100bd20 a2=94 a3=83 items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit: BPF prog-id=15 op=LOAD May 15 01:11:19.618000 audit[3656]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc3ec754f0 a2=98 a3=1999999999999999 items=0 ppid=3541 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.618000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 01:11:19.618000 audit: BPF prog-id=15 op=UNLOAD May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit: BPF prog-id=16 op=LOAD May 15 01:11:19.618000 audit[3656]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc3ec753d0 a2=74 a3=ffff items=0 ppid=3541 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.618000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 01:11:19.618000 audit: BPF prog-id=16 op=UNLOAD May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { perfmon } for pid=3656 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit[3656]: AVC avc: denied { bpf } for pid=3656 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.618000 audit: BPF prog-id=17 op=LOAD May 15 01:11:19.618000 audit[3656]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc3ec75410 a2=40 a3=7ffc3ec755f0 items=0 ppid=3541 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.618000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 01:11:19.618000 audit: BPF prog-id=17 op=UNLOAD May 15 01:11:19.712000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.712000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.712000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.712000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.712000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.712000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.712000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.712000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.712000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 15 01:11:19.712000 audit: BPF prog-id=18 op=LOAD May 15 01:11:19.712000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa6c85010 a2=98 a3=ffffffff items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.712000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.712000 audit: BPF prog-id=18 op=UNLOAD May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit: BPF prog-id=19 op=LOAD May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa6c84e20 a2=74 a3=540051 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit: BPF prog-id=19 op=UNLOAD May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 
01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit: BPF prog-id=20 op=LOAD May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa6c84e50 a2=94 a3=2 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit: BPF prog-id=20 op=UNLOAD May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fffa6c84d20 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa6c84d50 a2=28 a3=0 items=0 
ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa6c84c60 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fffa6c84d70 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fffa6c84d50 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fffa6c84d40 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fffa6c84d70 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa6c84d50 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa6c84d70 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa6c84d40 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fffa6c84db0 a2=28 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit: BPF prog-id=21 op=LOAD May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffa6c84c20 a2=40 a3=0 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit: BPF prog-id=21 op=UNLOAD May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 
01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7fffa6c84c10 a2=50 a3=2800 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7fffa6c84c10 a2=50 a3=2800 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 
audit: BPF prog-id=22 op=LOAD May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffa6c84430 a2=94 a3=2 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.716000 audit: BPF prog-id=22 op=UNLOAD May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { perfmon } for pid=3683 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit[3683]: AVC avc: denied { bpf } for pid=3683 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.716000 audit: BPF prog-id=23 op=LOAD May 15 01:11:19.716000 audit[3683]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffa6c84530 a2=94 a3=30 items=0 ppid=3541 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.716000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit: BPF prog-id=24 op=LOAD May 15 01:11:19.720000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc13674f0 a2=98 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.720000 audit: BPF prog-id=24 op=UNLOAD May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit: BPF prog-id=25 op=LOAD May 15 01:11:19.720000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc13672d0 a2=74 a3=540051 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.720000 audit: BPF prog-id=25 op=UNLOAD May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.720000 audit: BPF prog-id=26 op=LOAD May 15 
01:11:19.720000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc1367300 a2=94 a3=2 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.720000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.720000 audit: BPF prog-id=26 op=UNLOAD May 15 01:11:19.742074 systemd-networkd[1141]: vxlan.calico: Link UP May 15 01:11:19.742078 systemd-networkd[1141]: vxlan.calico: Gained carrier May 15 01:11:19.795000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.795000 audit: BPF prog-id=27 op=LOAD May 15 01:11:19.795000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc13671c0 a2=40 a3=1 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.795000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.795000 audit: BPF prog-id=27 op=UNLOAD May 15 01:11:19.795000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 15 01:11:19.795000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffdc1367290 a2=50 a3=7ffdc1367370 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.795000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.803000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.803000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc13671d0 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.803000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.803000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc1367200 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.803000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.803000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc1367110 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.803000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.803000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc1367220 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.803000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.803000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.803000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc1367200 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.803000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.803000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc13671f0 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.803000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.803000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc1367220 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.803000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.803000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc1367200 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffdc1367220 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdc13671f0 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdc1367260 a2=28 a3=0 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdc1367010 a2=50 a3=1 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit: BPF prog-id=28 op=LOAD May 15 01:11:19.804000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdc1367010 a2=94 a3=5 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.804000 audit: BPF prog-id=28 op=UNLOAD May 15 01:11:19.804000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.804000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdc13670c0 a2=50 a3=1 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.804000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffdc13671e0 a2=4 a3=38 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.805000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { confidentiality } for pid=3685 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:19.805000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdc1367230 a2=94 a3=6 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.805000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.805000 audit[3685]: AVC avc: denied { confidentiality } for pid=3685 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:19.805000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdc13669e0 a2=94 a3=83 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.805000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: 
AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { perfmon } for pid=3685 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { confidentiality } for pid=3685 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:19.806000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdc13669e0 a2=94 a3=83 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.806000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdc1368420 a2=10 a3=f1f00800 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.806000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdc13682c0 a2=10 a3=3 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.806000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdc1368260 a2=10 a3=3 items=0 ppid=3541 pid=3685 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.806000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.806000 audit[3685]: AVC avc: denied { bpf } for pid=3685 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:19.806000 audit[3685]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffdc1368260 a2=10 a3=7 items=0 ppid=3541 pid=3685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.806000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:19.811000 audit: BPF prog-id=23 op=UNLOAD May 15 01:11:19.999000 audit[3714]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3714 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:19.999000 audit[3714]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffcc7572b0 a2=0 a3=7fffcc75729c items=0 ppid=3541 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:19.999000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:20.020000 audit[3712]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3712 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:20.020000 audit[3712]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffefdc45f20 a2=0 a3=7ffefdc45f0c items=0 ppid=3541 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:20.020000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:20.024000 audit[3715]: NETFILTER_CFG table=filter:99 family=2 entries=39 op=nft_register_chain pid=3715 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:20.024000 audit[3715]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffc6f9ee8c0 a2=0 a3=7ffc6f9ee8ac items=0 ppid=3541 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:20.024000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:20.034000 audit[3711]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=3711 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:20.034000 audit[3711]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe462c5160 a2=0 a3=7ffe462c514c items=0 ppid=3541 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:20.034000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:20.736911 env[1378]: time="2025-05-15T01:11:20.736721834Z" level=info msg="StopPodSandbox for \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\"" May 15 01:11:20.736911 env[1378]: time="2025-05-15T01:11:20.736838155Z" level=info msg="StopPodSandbox for \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\"" May 15 01:11:20.921250 kubelet[2379]: I0515 01:11:20.805734 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l85sc" podStartSLOduration=4.861683915 podStartE2EDuration="24.800408163s" podCreationTimestamp="2025-05-15 01:10:56 +0000 UTC" firstStartedPulling="2025-05-15 01:10:57.264493173 +0000 UTC m=+20.736376258" lastFinishedPulling="2025-05-15 01:11:17.203217422 +0000 UTC m=+40.675100506" observedRunningTime="2025-05-15 01:11:17.99807428 +0000 UTC m=+41.469957367" watchObservedRunningTime="2025-05-15 01:11:20.800408163 +0000 UTC m=+44.272291252" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:20.798 [INFO][3766] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:20.799 [INFO][3766] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" iface="eth0" netns="/var/run/netns/cni-842cbd78-896e-f011-2c36-ec08a869806e" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:20.799 [INFO][3766] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" iface="eth0" netns="/var/run/netns/cni-842cbd78-896e-f011-2c36-ec08a869806e" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:20.801 [INFO][3766] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" iface="eth0" netns="/var/run/netns/cni-842cbd78-896e-f011-2c36-ec08a869806e" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:20.801 [INFO][3766] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:20.801 [INFO][3766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:21.135 [INFO][3779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:21.137 [INFO][3779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:21.138 [INFO][3779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:21.148 [WARNING][3779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:21.148 [INFO][3779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:21.148 [INFO][3779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:21.153516 env[1378]: 2025-05-15 01:11:21.150 [INFO][3766] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:21.155100 systemd[1]: run-netns-cni\x2d842cbd78\x2d896e\x2df011\x2d2c36\x2dec08a869806e.mount: Deactivated successfully. May 15 01:11:21.156152 env[1378]: time="2025-05-15T01:11:21.156128227Z" level=info msg="TearDown network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\" successfully" May 15 01:11:21.156152 env[1378]: time="2025-05-15T01:11:21.156150980Z" level=info msg="StopPodSandbox for \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\" returns successfully" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:20.799 [INFO][3765] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:20.800 [INFO][3765] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" iface="eth0" netns="/var/run/netns/cni-ba61b6ca-2cd1-d33b-9ecb-dff0424cb263" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:20.800 [INFO][3765] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" iface="eth0" netns="/var/run/netns/cni-ba61b6ca-2cd1-d33b-9ecb-dff0424cb263" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:20.801 [INFO][3765] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" iface="eth0" netns="/var/run/netns/cni-ba61b6ca-2cd1-d33b-9ecb-dff0424cb263" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:20.801 [INFO][3765] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:20.801 [INFO][3765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:21.135 [INFO][3780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:21.137 [INFO][3780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:21.148 [INFO][3780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:21.159 [WARNING][3780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:21.159 [INFO][3780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:21.160 [INFO][3780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:21.164112 env[1378]: 2025-05-15 01:11:21.161 [INFO][3765] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:21.165723 systemd[1]: run-netns-cni\x2dba61b6ca\x2d2cd1\x2dd33b\x2d9ecb\x2ddff0424cb263.mount: Deactivated successfully. 
May 15 01:11:21.166564 env[1378]: time="2025-05-15T01:11:21.166509919Z" level=info msg="TearDown network for sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\" successfully" May 15 01:11:21.166564 env[1378]: time="2025-05-15T01:11:21.166532696Z" level=info msg="StopPodSandbox for \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\" returns successfully" May 15 01:11:21.170467 env[1378]: time="2025-05-15T01:11:21.170449396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5wxnk,Uid:e99f5774-1995-4a8c-89ab-22334a39be19,Namespace:kube-system,Attempt:1,}" May 15 01:11:21.171422 env[1378]: time="2025-05-15T01:11:21.171402563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b8879-bsbbc,Uid:9ec7ac22-58ee-4832-ade2-3b509e93036d,Namespace:calico-apiserver,Attempt:1,}" May 15 01:11:21.226165 systemd-networkd[1141]: vxlan.calico: Gained IPv6LL May 15 01:11:21.464387 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic9c6b6a0d48: link becomes ready May 15 01:11:21.462734 systemd-networkd[1141]: calic9c6b6a0d48: Link UP May 15 01:11:21.464841 systemd-networkd[1141]: calic9c6b6a0d48: Gained carrier May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.230 [INFO][3796] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0 calico-apiserver-554b8879- calico-apiserver 9ec7ac22-58ee-4832-ade2-3b509e93036d 762 0 2025-05-15 01:10:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:554b8879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-554b8879-bsbbc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9c6b6a0d48 [] []}} ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bsbbc" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bsbbc-" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.231 [INFO][3796] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bsbbc" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.281 [INFO][3817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" HandleID="k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.356 [INFO][3817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" HandleID="k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e2430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-554b8879-bsbbc", "timestamp":"2025-05-15 01:11:21.281819315 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.356 [INFO][3817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.356 [INFO][3817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.356 [INFO][3817] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.357 [INFO][3817] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.432 [INFO][3817] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.436 [INFO][3817] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.438 [INFO][3817] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.440 [INFO][3817] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.440 [INFO][3817] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.441 [INFO][3817] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.444 [INFO][3817] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.447 [INFO][3817] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.447 [INFO][3817] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" host="localhost" May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.447 [INFO][3817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:11:21.476185 env[1378]: 2025-05-15 01:11:21.447 [INFO][3817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" HandleID="k8s-pod-network.a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.480411 env[1378]: 2025-05-15 01:11:21.455 [INFO][3796] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bsbbc" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0", GenerateName:"calico-apiserver-554b8879-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ec7ac22-58ee-4832-ade2-3b509e93036d", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b8879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-554b8879-bsbbc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9c6b6a0d48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:21.480411 env[1378]: 2025-05-15 01:11:21.458 [INFO][3796] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bsbbc" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.480411 env[1378]: 2025-05-15 01:11:21.458 [INFO][3796] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9c6b6a0d48 ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bsbbc" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.480411 env[1378]: 2025-05-15 01:11:21.465 [INFO][3796] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bsbbc" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.480411 env[1378]: 2025-05-15 01:11:21.465 [INFO][3796] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Namespace="calico-apiserver" 
Pod="calico-apiserver-554b8879-bsbbc" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0", GenerateName:"calico-apiserver-554b8879-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ec7ac22-58ee-4832-ade2-3b509e93036d", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b8879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f", Pod:"calico-apiserver-554b8879-bsbbc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9c6b6a0d48", MAC:"d2:02:ac:9b:81:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:21.480411 env[1378]: 2025-05-15 01:11:21.473 [INFO][3796] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bsbbc" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:21.482440 systemd-networkd[1141]: cali67f0c615d95: Link UP May 15 01:11:21.483809 systemd-networkd[1141]: cali67f0c615d95: Gained carrier May 15 01:11:21.484269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali67f0c615d95: link becomes ready May 15 01:11:21.494000 audit[3841]: NETFILTER_CFG table=filter:101 family=2 entries=40 op=nft_register_chain pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:21.494000 audit[3841]: SYSCALL arch=c000003e syscall=46 success=yes exit=23492 a0=3 a1=7ffc26902280 a2=0 a3=7ffc2690226c items=0 ppid=3541 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:21.494000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.230 [INFO][3791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0 coredns-7db6d8ff4d- kube-system e99f5774-1995-4a8c-89ab-22334a39be19 761 0 2025-05-15 01:10:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-5wxnk eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali67f0c615d95 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wxnk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5wxnk-" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.230 [INFO][3791] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wxnk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.378 [INFO][3815] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" HandleID="k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.394 [INFO][3815] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" HandleID="k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-5wxnk", "timestamp":"2025-05-15 01:11:21.378840015 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.394 [INFO][3815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.447 [INFO][3815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.447 [INFO][3815] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.450 [INFO][3815] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.452 [INFO][3815] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.455 [INFO][3815] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.456 [INFO][3815] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.460 [INFO][3815] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.460 [INFO][3815] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.460 [INFO][3815] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75 May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.467 [INFO][3815] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.470 [INFO][3815] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.470 [INFO][3815] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" host="localhost" May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.470 [INFO][3815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:11:21.509753 env[1378]: 2025-05-15 01:11:21.470 [INFO][3815] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" HandleID="k8s-pod-network.efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.510542 env[1378]: 2025-05-15 01:11:21.477 [INFO][3791] cni-plugin/k8s.go 386: Populated endpoint ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wxnk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e99f5774-1995-4a8c-89ab-22334a39be19", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-5wxnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67f0c615d95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:21.510542 env[1378]: 2025-05-15 01:11:21.478 [INFO][3791] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wxnk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.510542 env[1378]: 2025-05-15 01:11:21.478 [INFO][3791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67f0c615d95 ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wxnk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.510542 env[1378]: 2025-05-15 01:11:21.484 [INFO][3791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wxnk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.510542 env[1378]: 2025-05-15 01:11:21.484 [INFO][3791] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wxnk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e99f5774-1995-4a8c-89ab-22334a39be19", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75", Pod:"coredns-7db6d8ff4d-5wxnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67f0c615d95", MAC:"62:49:d3:43:eb:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:21.510542 env[1378]: 2025-05-15 01:11:21.506 [INFO][3791] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wxnk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:21.561304 env[1378]: time="2025-05-15T01:11:21.560297591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:21.561304 env[1378]: time="2025-05-15T01:11:21.560324520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:21.561304 env[1378]: time="2025-05-15T01:11:21.560332036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:21.561304 env[1378]: time="2025-05-15T01:11:21.560408020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f pid=3871 runtime=io.containerd.runc.v2 May 15 01:11:21.561869 env[1378]: time="2025-05-15T01:11:21.561842498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:21.561945 env[1378]: time="2025-05-15T01:11:21.561931465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:21.562019 env[1378]: time="2025-05-15T01:11:21.562001623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:21.562170 env[1378]: time="2025-05-15T01:11:21.562153446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75 pid=3867 runtime=io.containerd.runc.v2 May 15 01:11:21.575000 audit[3895]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:21.575000 audit[3895]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffd14337ef0 a2=0 a3=7ffd14337edc items=0 ppid=3541 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:21.575000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:21.592386 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:11:21.606192 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:11:21.619117 env[1378]: time="2025-05-15T01:11:21.619085160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5wxnk,Uid:e99f5774-1995-4a8c-89ab-22334a39be19,Namespace:kube-system,Attempt:1,} returns sandbox id \"efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75\"" May 15 01:11:21.622116 env[1378]: time="2025-05-15T01:11:21.622092942Z" level=info msg="CreateContainer within sandbox \"efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 01:11:21.638026 env[1378]: time="2025-05-15T01:11:21.637977195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b8879-bsbbc,Uid:9ec7ac22-58ee-4832-ade2-3b509e93036d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f\"" May 15 01:11:21.640003 env[1378]: time="2025-05-15T01:11:21.639978135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 01:11:21.641135 env[1378]: time="2025-05-15T01:11:21.641117837Z" level=info msg="CreateContainer within sandbox \"efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"81d36a5c48e212635cdb5f992b4ef5c75c2b67b08df3580921e7992a187736a9\"" May 15 01:11:21.641522 env[1378]: time="2025-05-15T01:11:21.641506688Z" level=info msg="StartContainer for \"81d36a5c48e212635cdb5f992b4ef5c75c2b67b08df3580921e7992a187736a9\"" May 15 01:11:21.672824 env[1378]: time="2025-05-15T01:11:21.672798070Z" level=info msg="StartContainer for \"81d36a5c48e212635cdb5f992b4ef5c75c2b67b08df3580921e7992a187736a9\" returns successfully" May 15 01:11:22.051048 kubelet[2379]: 
I0515 01:11:22.050990 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5wxnk" podStartSLOduration=32.050974741 podStartE2EDuration="32.050974741s" podCreationTimestamp="2025-05-15 01:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:11:22.048913531 +0000 UTC m=+45.520796627" watchObservedRunningTime="2025-05-15 01:11:22.050974741 +0000 UTC m=+45.522857831" May 15 01:11:22.103000 audit[3979]: NETFILTER_CFG table=filter:103 family=2 entries=16 op=nft_register_rule pid=3979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:22.103000 audit[3979]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe250a8720 a2=0 a3=7ffe250a870c items=0 ppid=2516 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:22.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:22.107000 audit[3979]: NETFILTER_CFG table=nat:104 family=2 entries=14 op=nft_register_rule pid=3979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:22.107000 audit[3979]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe250a8720 a2=0 a3=0 items=0 ppid=2516 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:22.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:22.569340 systemd-networkd[1141]: calic9c6b6a0d48: Gained IPv6LL May 15 01:11:22.736281 env[1378]: time="2025-05-15T01:11:22.736250750Z" level=info msg="StopPodSandbox for \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\"" May 15 01:11:22.736657 env[1378]: time="2025-05-15T01:11:22.736644393Z" level=info msg="StopPodSandbox for \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\"" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.809 [INFO][4012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.810 [INFO][4012] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" iface="eth0" netns="/var/run/netns/cni-bdcb4037-93a0-3c3f-e6d4-2ccbc22c6ddd" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.810 [INFO][4012] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" iface="eth0" netns="/var/run/netns/cni-bdcb4037-93a0-3c3f-e6d4-2ccbc22c6ddd" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.810 [INFO][4012] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" iface="eth0" netns="/var/run/netns/cni-bdcb4037-93a0-3c3f-e6d4-2ccbc22c6ddd" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.810 [INFO][4012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.810 [INFO][4012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.839 [INFO][4025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.840 [INFO][4025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.840 [INFO][4025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.845 [WARNING][4025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.845 [INFO][4025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.846 [INFO][4025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:22.851273 env[1378]: 2025-05-15 01:11:22.849 [INFO][4012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:22.853337 systemd[1]: run-netns-cni\x2dbdcb4037\x2d93a0\x2d3c3f\x2de6d4\x2d2ccbc22c6ddd.mount: Deactivated successfully. May 15 01:11:22.853662 env[1378]: time="2025-05-15T01:11:22.853615372Z" level=info msg="TearDown network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\" successfully" May 15 01:11:22.853715 env[1378]: time="2025-05-15T01:11:22.853703670Z" level=info msg="StopPodSandbox for \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\" returns successfully" May 15 01:11:22.854502 env[1378]: time="2025-05-15T01:11:22.854485041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxhjw,Uid:9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5,Namespace:calico-system,Attempt:1,}" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.817 [INFO][4013] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.817 [INFO][4013] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" iface="eth0" netns="/var/run/netns/cni-0945a161-ab34-727a-a538-a03bf80d7a3c" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.817 [INFO][4013] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" iface="eth0" netns="/var/run/netns/cni-0945a161-ab34-727a-a538-a03bf80d7a3c" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.818 [INFO][4013] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" iface="eth0" netns="/var/run/netns/cni-0945a161-ab34-727a-a538-a03bf80d7a3c" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.818 [INFO][4013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.818 [INFO][4013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.843 [INFO][4027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.843 [INFO][4027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.846 [INFO][4027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.849 [WARNING][4027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.850 [INFO][4027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.850 [INFO][4027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:22.856183 env[1378]: 2025-05-15 01:11:22.855 [INFO][4013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:22.857803 systemd[1]: run-netns-cni\x2d0945a161\x2dab34\x2d727a\x2da538\x2da03bf80d7a3c.mount: Deactivated successfully. 
May 15 01:11:22.858752 env[1378]: time="2025-05-15T01:11:22.858730627Z" level=info msg="TearDown network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\" successfully" May 15 01:11:22.858796 env[1378]: time="2025-05-15T01:11:22.858751413Z" level=info msg="StopPodSandbox for \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\" returns successfully" May 15 01:11:22.859069 env[1378]: time="2025-05-15T01:11:22.859052562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frhnb,Uid:3b6acc2c-032c-40b8-85fc-0967a7269b9b,Namespace:kube-system,Attempt:1,}" May 15 01:11:22.940376 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 01:11:22.940440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2df672251aa: link becomes ready May 15 01:11:22.937904 systemd-networkd[1141]: cali2df672251aa: Link UP May 15 01:11:22.940043 systemd-networkd[1141]: cali2df672251aa: Gained carrier May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.894 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lxhjw-eth0 csi-node-driver- calico-system 9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5 782 0 2025-05-15 01:10:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lxhjw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2df672251aa [] []}} ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Namespace="calico-system" Pod="csi-node-driver-lxhjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxhjw-" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.894 [INFO][4038] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Namespace="calico-system" Pod="csi-node-driver-lxhjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.915 [INFO][4063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" HandleID="k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.920 [INFO][4063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" HandleID="k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lxhjw", "timestamp":"2025-05-15 01:11:22.915264016 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.920 [INFO][4063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.920 [INFO][4063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.920 [INFO][4063] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.921 [INFO][4063] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.923 [INFO][4063] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.925 [INFO][4063] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.926 [INFO][4063] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.927 [INFO][4063] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.928 [INFO][4063] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.928 [INFO][4063] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268 May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.931 [INFO][4063] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.934 [INFO][4063] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.934 [INFO][4063] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" host="localhost" May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.934 [INFO][4063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
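The ipam/ipam.go entries above trace one address assignment end to end: look up host affinities, confirm the affine block 192.168.88.128/26, claim an address, create a handle named after the container ID, and write the block back. A small parsing sketch (Python assumed; the function name and input are illustrative, not part of Calico) that pulls the claimed IPs out of such journal entries:

import re

# Matches entries like:
#   ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f8533be..."
CLAIM = re.compile(
    r'Successfully claimed IPs: \[(?P<ips>[^\]]+)\] block=(?P<block>\S+) '
    r'handle="k8s-pod-network\.(?P<cid>[0-9a-f]+)"'
)

def claimed_ips(journal_text: str):
    """Yield (short container ID, block CIDR, list of claimed IPs) for each assignment found."""
    for m in CLAIM.finditer(journal_text):
        yield m.group("cid")[:12], m.group("block"), m.group("ips").split(", ")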
May 15 01:11:22.948926 env[1378]: 2025-05-15 01:11:22.934 [INFO][4063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" HandleID="k8s-pod-network.f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.949470 env[1378]: 2025-05-15 01:11:22.936 [INFO][4038] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Namespace="calico-system" Pod="csi-node-driver-lxhjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxhjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lxhjw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lxhjw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2df672251aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:22.949470 env[1378]: 2025-05-15 01:11:22.936 [INFO][4038] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Namespace="calico-system" Pod="csi-node-driver-lxhjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.949470 env[1378]: 2025-05-15 01:11:22.936 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2df672251aa ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Namespace="calico-system" Pod="csi-node-driver-lxhjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.949470 env[1378]: 2025-05-15 01:11:22.940 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Namespace="calico-system" Pod="csi-node-driver-lxhjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.949470 env[1378]: 2025-05-15 01:11:22.940 [INFO][4038] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Namespace="calico-system" Pod="csi-node-driver-lxhjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxhjw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lxhjw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268", Pod:"csi-node-driver-lxhjw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2df672251aa", MAC:"26:b4:e7:9f:34:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:22.949470 env[1378]: 2025-05-15 01:11:22.947 [INFO][4038] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268" Namespace="calico-system" Pod="csi-node-driver-lxhjw" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:22.953506 systemd-networkd[1141]: cali67f0c615d95: Gained IPv6LL May 15 01:11:22.976000 audit[4104]: NETFILTER_CFG table=filter:105 family=2 entries=42 op=nft_register_chain pid=4104 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:22.976000 audit[4104]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffd72250ca0 a2=0 a3=7ffd72250c8c items=0 ppid=3541 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:22.976000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:22.989973 systemd-networkd[1141]: cali57be6c0c532: Link UP May 15 01:11:22.991888 systemd-networkd[1141]: cali57be6c0c532: Gained carrier May 15 01:11:22.992266 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali57be6c0c532: link becomes ready May 15 01:11:22.998474 env[1378]: time="2025-05-15T01:11:22.998417872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:22.998474 env[1378]: time="2025-05-15T01:11:22.998454664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.909 [INFO][4046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0 coredns-7db6d8ff4d- kube-system 3b6acc2c-032c-40b8-85fc-0967a7269b9b 783 0 2025-05-15 01:10:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-frhnb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57be6c0c532 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frhnb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--frhnb-" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.909 [INFO][4046] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frhnb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.950 [INFO][4071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" HandleID="k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.962 [INFO][4071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" HandleID="k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000210230), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-frhnb", "timestamp":"2025-05-15 01:11:22.95098375 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.962 [INFO][4071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.962 [INFO][4071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.962 [INFO][4071] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.963 [INFO][4071] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.965 [INFO][4071] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.969 [INFO][4071] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.972 [INFO][4071] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.974 [INFO][4071] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.974 [INFO][4071] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.976 [INFO][4071] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805 May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.978 [INFO][4071] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.982 [INFO][4071] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.982 [INFO][4071] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" host="localhost" May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.982 [INFO][4071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:11:23.002696 env[1378]: 2025-05-15 01:11:22.982 [INFO][4071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" HandleID="k8s-pod-network.da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:23.003402 env[1378]: 2025-05-15 01:11:22.987 [INFO][4046] cni-plugin/k8s.go 386: Populated endpoint ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frhnb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b6acc2c-032c-40b8-85fc-0967a7269b9b", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-frhnb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57be6c0c532", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:23.003402 env[1378]: 2025-05-15 01:11:22.987 [INFO][4046] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frhnb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:23.003402 env[1378]: 2025-05-15 01:11:22.987 [INFO][4046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57be6c0c532 ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frhnb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:23.003402 env[1378]: 2025-05-15 01:11:22.992 [INFO][4046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frhnb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:23.003402 env[1378]: 2025-05-15 01:11:22.994 [INFO][4046] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frhnb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b6acc2c-032c-40b8-85fc-0967a7269b9b", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805", Pod:"coredns-7db6d8ff4d-frhnb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57be6c0c532", MAC:"12:88:9e:a3:ac:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:23.003402 env[1378]: 2025-05-15 01:11:23.001 [INFO][4046] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frhnb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:23.010263 env[1378]: time="2025-05-15T01:11:22.998463304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:23.010263 env[1378]: time="2025-05-15T01:11:23.003713626Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268 pid=4099 runtime=io.containerd.runc.v2 May 15 01:11:23.014000 audit[4126]: NETFILTER_CFG table=filter:106 family=2 entries=38 op=nft_register_chain pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:23.014000 audit[4126]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffca66d5620 a2=0 a3=7ffca66d560c items=0 ppid=3541 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:23.014000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:23.027555 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:11:23.032693 env[1378]: time="2025-05-15T01:11:23.032583302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:23.032693 env[1378]: time="2025-05-15T01:11:23.032612434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:23.032693 env[1378]: time="2025-05-15T01:11:23.032619578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:23.032875 env[1378]: time="2025-05-15T01:11:23.032846669Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805 pid=4148 runtime=io.containerd.runc.v2 May 15 01:11:23.048339 env[1378]: time="2025-05-15T01:11:23.044316455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxhjw,Uid:9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5,Namespace:calico-system,Attempt:1,} returns sandbox id \"f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268\"" May 15 01:11:23.058999 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:11:23.089000 audit[4185]: NETFILTER_CFG table=filter:107 family=2 entries=13 op=nft_register_rule pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:23.089000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc6cc7e890 a2=0 a3=7ffc6cc7e87c items=0 ppid=2516 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:23.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:23.092000 audit[4185]: NETFILTER_CFG table=nat:108 family=2 entries=35 op=nft_register_chain pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:23.092000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 
a1=7ffc6cc7e890 a2=0 a3=7ffc6cc7e87c items=0 ppid=2516 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:23.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:23.103695 env[1378]: time="2025-05-15T01:11:23.103635062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frhnb,Uid:3b6acc2c-032c-40b8-85fc-0967a7269b9b,Namespace:kube-system,Attempt:1,} returns sandbox id \"da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805\"" May 15 01:11:23.108623 env[1378]: time="2025-05-15T01:11:23.108594374Z" level=info msg="CreateContainer within sandbox \"da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 01:11:23.115407 env[1378]: time="2025-05-15T01:11:23.115358740Z" level=info msg="CreateContainer within sandbox \"da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c006c8716cd793bba97a2704123ded6659c51d4b19f5ee7689734b55494478d2\"" May 15 01:11:23.116810 env[1378]: time="2025-05-15T01:11:23.116791444Z" level=info msg="StartContainer for \"c006c8716cd793bba97a2704123ded6659c51d4b19f5ee7689734b55494478d2\"" May 15 01:11:23.164128 env[1378]: time="2025-05-15T01:11:23.164088293Z" level=info msg="StartContainer for \"c006c8716cd793bba97a2704123ded6659c51d4b19f5ee7689734b55494478d2\" returns successfully" May 15 01:11:23.736622 env[1378]: time="2025-05-15T01:11:23.736459118Z" level=info msg="StopPodSandbox for \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\"" May 15 01:11:23.736961 env[1378]: time="2025-05-15T01:11:23.736740588Z" level=info msg="StopPodSandbox for \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\"" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.894 [INFO][4264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.894 [INFO][4264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" iface="eth0" netns="/var/run/netns/cni-222e5fe1-15a2-1d15-2d6a-bdeae935d52e" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.895 [INFO][4264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" iface="eth0" netns="/var/run/netns/cni-222e5fe1-15a2-1d15-2d6a-bdeae935d52e" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.896 [INFO][4264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" iface="eth0" netns="/var/run/netns/cni-222e5fe1-15a2-1d15-2d6a-bdeae935d52e" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.896 [INFO][4264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.896 [INFO][4264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.919 [INFO][4273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.919 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.919 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.923 [WARNING][4273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.923 [INFO][4273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.924 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:23.927062 env[1378]: 2025-05-15 01:11:23.926 [INFO][4264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:23.930182 env[1378]: time="2025-05-15T01:11:23.928843654Z" level=info msg="TearDown network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\" successfully" May 15 01:11:23.930182 env[1378]: time="2025-05-15T01:11:23.928866015Z" level=info msg="StopPodSandbox for \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\" returns successfully" May 15 01:11:23.930182 env[1378]: time="2025-05-15T01:11:23.929291157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd77bbfb-srwdw,Uid:a465e0d0-b1f5-4d68-a5ea-fce28821f59f,Namespace:calico-apiserver,Attempt:1,}" May 15 01:11:23.929078 systemd[1]: run-netns-cni\x2d222e5fe1\x2d15a2\x2d1d15\x2d2d6a\x2dbdeae935d52e.mount: Deactivated successfully. May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.908 [INFO][4255] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.908 [INFO][4255] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" iface="eth0" netns="/var/run/netns/cni-67d82755-cab9-6301-7644-5e41b660d6dd" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.908 [INFO][4255] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" iface="eth0" netns="/var/run/netns/cni-67d82755-cab9-6301-7644-5e41b660d6dd" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.908 [INFO][4255] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" iface="eth0" netns="/var/run/netns/cni-67d82755-cab9-6301-7644-5e41b660d6dd" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.909 [INFO][4255] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.909 [INFO][4255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.955 [INFO][4279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.955 [INFO][4279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.955 [INFO][4279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.958 [WARNING][4279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.958 [INFO][4279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.959 [INFO][4279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:23.963474 env[1378]: 2025-05-15 01:11:23.960 [INFO][4255] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:23.966945 env[1378]: time="2025-05-15T01:11:23.965297899Z" level=info msg="TearDown network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\" successfully" May 15 01:11:23.966945 env[1378]: time="2025-05-15T01:11:23.965321488Z" level=info msg="StopPodSandbox for \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\" returns successfully" May 15 01:11:23.966945 env[1378]: time="2025-05-15T01:11:23.965790761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd77bbfb-stq89,Uid:ab75af42-a771-4175-8a6f-81471c06a1c4,Namespace:calico-apiserver,Attempt:1,}" May 15 01:11:23.965152 systemd[1]: run-netns-cni\x2d67d82755\x2dcab9\x2d6301\x2d7644\x2d5e41b660d6dd.mount: Deactivated successfully. May 15 01:11:24.058248 kubelet[2379]: I0515 01:11:24.056578 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-frhnb" podStartSLOduration=34.056565185 podStartE2EDuration="34.056565185s" podCreationTimestamp="2025-05-15 01:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:11:24.056174132 +0000 UTC m=+47.528057219" watchObservedRunningTime="2025-05-15 01:11:24.056565185 +0000 UTC m=+47.528448278" May 15 01:11:24.073000 audit[4325]: NETFILTER_CFG table=filter:109 family=2 entries=10 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:24.073000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffeeea35b40 a2=0 a3=7ffeeea35b2c items=0 ppid=2516 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:24.073000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:24.113000 audit[4325]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:24.113000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeeea35b40 a2=0 a3=7ffeeea35b2c items=0 ppid=2516 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:24.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:24.140000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 01:11:24.140091 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3a2bddc7410: link becomes ready May 15 01:11:24.140195 systemd-networkd[1141]: cali3a2bddc7410: Link UP May 15 01:11:24.140348 systemd-networkd[1141]: cali3a2bddc7410: Gained carrier May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:23.997 [INFO][4285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0 calico-apiserver-68cd77bbfb- calico-apiserver a465e0d0-b1f5-4d68-a5ea-fce28821f59f 802 0 2025-05-15 01:10:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:68cd77bbfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68cd77bbfb-srwdw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3a2bddc7410 [] []}} ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-srwdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:23.997 [INFO][4285] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-srwdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.036 [INFO][4310] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.043 [INFO][4310] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68cd77bbfb-srwdw", "timestamp":"2025-05-15 01:11:24.03675011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.043 [INFO][4310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.043 [INFO][4310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.043 [INFO][4310] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.055 [INFO][4310] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.069 [INFO][4310] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.072 [INFO][4310] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.076 [INFO][4310] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.079 [INFO][4310] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.079 [INFO][4310] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.081 [INFO][4310] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0 May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.102 [INFO][4310] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.123 [INFO][4310] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.123 [INFO][4310] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" host="localhost" May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.123 [INFO][4310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:11:24.163675 env[1378]: 2025-05-15 01:11:24.123 [INFO][4310] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:24.164448 env[1378]: 2025-05-15 01:11:24.127 [INFO][4285] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-srwdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0", GenerateName:"calico-apiserver-68cd77bbfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a465e0d0-b1f5-4d68-a5ea-fce28821f59f", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd77bbfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68cd77bbfb-srwdw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a2bddc7410", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:24.164448 env[1378]: 2025-05-15 01:11:24.127 [INFO][4285] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-srwdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:24.164448 env[1378]: 2025-05-15 01:11:24.127 [INFO][4285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a2bddc7410 ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-srwdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:24.164448 env[1378]: 2025-05-15 01:11:24.141 [INFO][4285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-srwdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:24.164448 env[1378]: 2025-05-15 01:11:24.141 [INFO][4285] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Namespace="calico-apiserver" 
Pod="calico-apiserver-68cd77bbfb-srwdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0", GenerateName:"calico-apiserver-68cd77bbfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a465e0d0-b1f5-4d68-a5ea-fce28821f59f", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd77bbfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0", Pod:"calico-apiserver-68cd77bbfb-srwdw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a2bddc7410", MAC:"52:72:98:d6:f5:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:24.164448 env[1378]: 2025-05-15 01:11:24.160 [INFO][4285] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-srwdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:24.173000 audit[4328]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=4328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:24.186283 systemd-networkd[1141]: calie82da44cbca: Link UP May 15 01:11:24.187968 systemd-networkd[1141]: calie82da44cbca: Gained carrier May 15 01:11:24.188413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie82da44cbca: link becomes ready May 15 01:11:24.173000 audit[4328]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdedbfe610 a2=0 a3=7ffdedbfe5fc items=0 ppid=2516 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:24.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.002 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0 calico-apiserver-68cd77bbfb- calico-apiserver ab75af42-a771-4175-8a6f-81471c06a1c4 803 0 2025-05-15 01:10:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68cd77bbfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} 
{k8s localhost calico-apiserver-68cd77bbfb-stq89 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie82da44cbca [] []}} ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-stq89" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.002 [INFO][4297] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-stq89" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.046 [INFO][4315] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.061 [INFO][4315] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b65d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68cd77bbfb-stq89", "timestamp":"2025-05-15 01:11:24.046931082 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.061 [INFO][4315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.123 [INFO][4315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.123 [INFO][4315] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.145 [INFO][4315] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.158 [INFO][4315] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.161 [INFO][4315] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.162 [INFO][4315] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.168 [INFO][4315] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.168 [INFO][4315] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.169 [INFO][4315] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.174 [INFO][4315] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.178 [INFO][4315] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.178 [INFO][4315] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" host="localhost" May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.178 [INFO][4315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:11:24.202501 env[1378]: 2025-05-15 01:11:24.178 [INFO][4315] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:24.203712 env[1378]: 2025-05-15 01:11:24.183 [INFO][4297] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-stq89" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0", GenerateName:"calico-apiserver-68cd77bbfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab75af42-a771-4175-8a6f-81471c06a1c4", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd77bbfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68cd77bbfb-stq89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie82da44cbca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:24.203712 env[1378]: 2025-05-15 01:11:24.183 [INFO][4297] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-stq89" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:24.203712 env[1378]: 2025-05-15 01:11:24.183 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie82da44cbca ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-stq89" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:24.203712 env[1378]: 2025-05-15 01:11:24.188 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-stq89" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:24.203712 env[1378]: 2025-05-15 01:11:24.188 [INFO][4297] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Namespace="calico-apiserver" 
Pod="calico-apiserver-68cd77bbfb-stq89" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0", GenerateName:"calico-apiserver-68cd77bbfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab75af42-a771-4175-8a6f-81471c06a1c4", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd77bbfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee", Pod:"calico-apiserver-68cd77bbfb-stq89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie82da44cbca", MAC:"8a:55:e2:87:cf:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:24.203712 env[1378]: 2025-05-15 01:11:24.201 [INFO][4297] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Namespace="calico-apiserver" Pod="calico-apiserver-68cd77bbfb-stq89" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:24.213000 audit[4328]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:24.213000 audit[4328]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdedbfe610 a2=0 a3=7ffdedbfe5fc items=0 ppid=2516 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:24.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:24.228000 audit[4356]: NETFILTER_CFG table=filter:113 family=2 entries=46 op=nft_register_chain pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:24.228000 audit[4356]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffdb42ed620 a2=0 a3=7ffdb42ed60c items=0 ppid=3541 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:24.228000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:24.246399 env[1378]: time="2025-05-15T01:11:24.230865174Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:24.246399 env[1378]: time="2025-05-15T01:11:24.231412730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:24.246399 env[1378]: time="2025-05-15T01:11:24.231420871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:24.246399 env[1378]: time="2025-05-15T01:11:24.231560460Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0 pid=4354 runtime=io.containerd.runc.v2 May 15 01:11:24.249526 env[1378]: time="2025-05-15T01:11:24.249468463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:24.249674 env[1378]: time="2025-05-15T01:11:24.249657491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:24.249756 env[1378]: time="2025-05-15T01:11:24.249743204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:24.251575 env[1378]: time="2025-05-15T01:11:24.251532097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee pid=4379 runtime=io.containerd.runc.v2 May 15 01:11:24.289000 audit[4386]: NETFILTER_CFG table=filter:114 family=2 entries=50 op=nft_register_chain pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:24.289000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=25080 a0=3 a1=7fff7d536250 a2=0 a3=7fff7d53623c items=0 ppid=3541 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:24.289000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:24.297387 systemd-networkd[1141]: cali57be6c0c532: Gained IPv6LL May 15 01:11:24.361936 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:11:24.377006 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:11:24.409668 env[1378]: time="2025-05-15T01:11:24.409637523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd77bbfb-stq89,Uid:ab75af42-a771-4175-8a6f-81471c06a1c4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\"" May 15 01:11:24.419115 env[1378]: time="2025-05-15T01:11:24.419090810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cd77bbfb-srwdw,Uid:a465e0d0-b1f5-4d68-a5ea-fce28821f59f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\"" May 15 01:11:24.617371 systemd-networkd[1141]: cali2df672251aa: Gained IPv6LL May 15 01:11:24.817277 
env[1378]: time="2025-05-15T01:11:24.817252462Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:24.818783 env[1378]: time="2025-05-15T01:11:24.818764866Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:24.819625 env[1378]: time="2025-05-15T01:11:24.819609187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:24.820421 env[1378]: time="2025-05-15T01:11:24.820407649Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:24.820792 env[1378]: time="2025-05-15T01:11:24.820771386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 01:11:24.826153 env[1378]: time="2025-05-15T01:11:24.826131224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 01:11:24.839166 env[1378]: time="2025-05-15T01:11:24.839135400Z" level=info msg="CreateContainer within sandbox \"a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 01:11:24.845317 env[1378]: time="2025-05-15T01:11:24.845282480Z" level=info msg="CreateContainer within sandbox \"a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"adf2076ae56115d53eae2cdcb013cee32cc59d04451d8b637ce5f66eb8b216ad\"" May 15 01:11:24.846594 env[1378]: time="2025-05-15T01:11:24.846377539Z" level=info msg="StartContainer for \"adf2076ae56115d53eae2cdcb013cee32cc59d04451d8b637ce5f66eb8b216ad\"" May 15 01:11:24.910533 env[1378]: time="2025-05-15T01:11:24.910509358Z" level=info msg="StartContainer for \"adf2076ae56115d53eae2cdcb013cee32cc59d04451d8b637ce5f66eb8b216ad\" returns successfully" May 15 01:11:24.973347 kubelet[2379]: I0515 01:11:24.973323 2379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:11:25.159000 audit[4525]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:25.167147 kernel: kauditd_printk_skb: 553 callbacks suppressed May 15 01:11:25.167194 kernel: audit: type=1325 audit(1747271485.159:409): table=filter:115 family=2 entries=10 op=nft_register_rule pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:25.173470 kernel: audit: type=1300 audit(1747271485.159:409): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd48459930 a2=0 a3=7ffd4845991c items=0 ppid=2516 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:25.173513 kernel: audit: type=1327 audit(1747271485.159:409): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:25.173536 kernel: audit: type=1325 audit(1747271485.169:410): table=nat:116 family=2 entries=20 op=nft_register_rule pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:25.173551 kernel: audit: type=1300 audit(1747271485.169:410): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd48459930 a2=0 a3=7ffd4845991c items=0 ppid=2516 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:25.159000 audit[4525]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd48459930 a2=0 a3=7ffd4845991c items=0 ppid=2516 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:25.159000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:25.169000 audit[4525]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:25.177298 kernel: audit: type=1327 audit(1747271485.169:410): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:25.169000 audit[4525]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd48459930 a2=0 a3=7ffd4845991c items=0 ppid=2516 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:25.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:25.321319 systemd-networkd[1141]: cali3a2bddc7410: Gained IPv6LL May 15 01:11:25.705351 systemd-networkd[1141]: calie82da44cbca: Gained IPv6LL May 15 01:11:25.735856 env[1378]: time="2025-05-15T01:11:25.735821448Z" level=info msg="StopPodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\"" May 15 01:11:25.864077 kubelet[2379]: I0515 01:11:25.863968 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-554b8879-bsbbc" podStartSLOduration=25.6766719 podStartE2EDuration="28.863951744s" podCreationTimestamp="2025-05-15 01:10:57 +0000 UTC" firstStartedPulling="2025-05-15 01:11:21.638714038 +0000 UTC m=+45.110597122" lastFinishedPulling="2025-05-15 01:11:24.825993884 +0000 UTC m=+48.297876966" observedRunningTime="2025-05-15 01:11:25.102576558 +0000 UTC m=+48.574459651" watchObservedRunningTime="2025-05-15 01:11:25.863951744 +0000 UTC m=+49.335834830" May 15 01:11:26.088184 kubelet[2379]: I0515 01:11:26.087514 2379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.863 [INFO][4539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.863 [INFO][4539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" iface="eth0" netns="/var/run/netns/cni-47a2d537-a4fd-059a-2865-74c639d4642d" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.864 [INFO][4539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" iface="eth0" netns="/var/run/netns/cni-47a2d537-a4fd-059a-2865-74c639d4642d" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.864 [INFO][4539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" iface="eth0" netns="/var/run/netns/cni-47a2d537-a4fd-059a-2865-74c639d4642d" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.864 [INFO][4539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.864 [INFO][4539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.931 [INFO][4547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.932 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.932 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.936 [WARNING][4547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.936 [INFO][4547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.957 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:26.203595 env[1378]: 2025-05-15 01:11:25.959 [INFO][4539] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:26.226062 env[1378]: time="2025-05-15T01:11:26.206325524Z" level=info msg="TearDown network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" successfully" May 15 01:11:26.226062 env[1378]: time="2025-05-15T01:11:26.206356208Z" level=info msg="StopPodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" returns successfully" May 15 01:11:26.226062 env[1378]: time="2025-05-15T01:11:26.206875419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f75b95c-2fjvp,Uid:285861ef-806b-4ef3-881e-bc2ab411454c,Namespace:calico-system,Attempt:1,}" May 15 01:11:26.205573 systemd[1]: run-netns-cni\x2d47a2d537\x2da4fd\x2d059a\x2d2865\x2d74c639d4642d.mount: Deactivated successfully. May 15 01:11:26.444833 systemd-networkd[1141]: cali8926d0478d6: Link UP May 15 01:11:26.446907 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 01:11:26.446934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8926d0478d6: link becomes ready May 15 01:11:26.447053 systemd-networkd[1141]: cali8926d0478d6: Gained carrier May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.351 [INFO][4554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0 calico-kube-controllers-77f75b95c- calico-system 285861ef-806b-4ef3-881e-bc2ab411454c 834 0 2025-05-15 01:10:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77f75b95c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77f75b95c-2fjvp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8926d0478d6 [] []}} ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Namespace="calico-system" Pod="calico-kube-controllers-77f75b95c-2fjvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.351 [INFO][4554] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Namespace="calico-system" Pod="calico-kube-controllers-77f75b95c-2fjvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.403 [INFO][4567] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.408 [INFO][4567] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e2460), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77f75b95c-2fjvp", 
"timestamp":"2025-05-15 01:11:26.403716961 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.408 [INFO][4567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.408 [INFO][4567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.408 [INFO][4567] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.410 [INFO][4567] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.412 [INFO][4567] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.414 [INFO][4567] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.415 [INFO][4567] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.416 [INFO][4567] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.416 [INFO][4567] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.417 [INFO][4567] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.431 [INFO][4567] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.441 [INFO][4567] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.441 [INFO][4567] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" host="localhost" May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.441 [INFO][4567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:11:26.482073 env[1378]: 2025-05-15 01:11:26.441 [INFO][4567] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.487000 audit[4578]: NETFILTER_CFG table=filter:117 family=2 entries=54 op=nft_register_chain pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:26.506371 kernel: audit: type=1325 audit(1747271486.487:411): table=filter:117 family=2 entries=54 op=nft_register_chain pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:26.506421 kernel: audit: type=1300 audit(1747271486.487:411): arch=c000003e syscall=46 success=yes exit=24580 a0=3 a1=7ffd43cecb30 a2=0 a3=7ffd43cecb1c items=0 ppid=3541 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:26.506438 kernel: audit: type=1327 audit(1747271486.487:411): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:26.487000 audit[4578]: SYSCALL arch=c000003e syscall=46 success=yes exit=24580 a0=3 a1=7ffd43cecb30 a2=0 a3=7ffd43cecb1c items=0 ppid=3541 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:26.487000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:26.506568 env[1378]: 2025-05-15 01:11:26.443 [INFO][4554] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Namespace="calico-system" Pod="calico-kube-controllers-77f75b95c-2fjvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0", GenerateName:"calico-kube-controllers-77f75b95c-", Namespace:"calico-system", SelfLink:"", UID:"285861ef-806b-4ef3-881e-bc2ab411454c", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f75b95c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77f75b95c-2fjvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8926d0478d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:26.506568 env[1378]: 2025-05-15 01:11:26.443 [INFO][4554] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Namespace="calico-system" Pod="calico-kube-controllers-77f75b95c-2fjvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.506568 env[1378]: 2025-05-15 01:11:26.443 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8926d0478d6 ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Namespace="calico-system" Pod="calico-kube-controllers-77f75b95c-2fjvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.506568 env[1378]: 2025-05-15 01:11:26.446 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Namespace="calico-system" Pod="calico-kube-controllers-77f75b95c-2fjvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.506568 env[1378]: 2025-05-15 01:11:26.446 [INFO][4554] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Namespace="calico-system" Pod="calico-kube-controllers-77f75b95c-2fjvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0", GenerateName:"calico-kube-controllers-77f75b95c-", Namespace:"calico-system", SelfLink:"", UID:"285861ef-806b-4ef3-881e-bc2ab411454c", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f75b95c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba", Pod:"calico-kube-controllers-77f75b95c-2fjvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8926d0478d6", MAC:"2a:fc:c1:a9:11:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:26.506568 env[1378]: 2025-05-15 01:11:26.480 [INFO][4554] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Namespace="calico-system" 
Pod="calico-kube-controllers-77f75b95c-2fjvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:26.516338 env[1378]: time="2025-05-15T01:11:26.516260379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:26.516412 env[1378]: time="2025-05-15T01:11:26.516368460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:26.516412 env[1378]: time="2025-05-15T01:11:26.516385395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:26.524310 env[1378]: time="2025-05-15T01:11:26.516584964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba pid=4594 runtime=io.containerd.runc.v2 May 15 01:11:26.539674 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:11:26.567459 env[1378]: time="2025-05-15T01:11:26.567222058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f75b95c-2fjvp,Uid:285861ef-806b-4ef3-881e-bc2ab411454c,Namespace:calico-system,Attempt:1,} returns sandbox id \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\"" May 15 01:11:26.767006 env[1378]: time="2025-05-15T01:11:26.765740842Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:26.767576 env[1378]: time="2025-05-15T01:11:26.767559005Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:26.768259 env[1378]: time="2025-05-15T01:11:26.768246723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:26.768989 env[1378]: time="2025-05-15T01:11:26.768976787Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:26.769445 env[1378]: time="2025-05-15T01:11:26.769425632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 01:11:26.771428 env[1378]: time="2025-05-15T01:11:26.770250222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 01:11:26.777546 env[1378]: time="2025-05-15T01:11:26.777519803Z" level=info msg="CreateContainer within sandbox \"f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 01:11:26.804744 env[1378]: time="2025-05-15T01:11:26.804721637Z" level=info msg="CreateContainer within sandbox \"f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"612c57b307556249796abb298a3ee74fe9bdeda908d7c0fc493cb2abe889e176\"" May 15 
01:11:26.806140 env[1378]: time="2025-05-15T01:11:26.806118012Z" level=info msg="StartContainer for \"612c57b307556249796abb298a3ee74fe9bdeda908d7c0fc493cb2abe889e176\"" May 15 01:11:26.894869 env[1378]: time="2025-05-15T01:11:26.894833027Z" level=info msg="StartContainer for \"612c57b307556249796abb298a3ee74fe9bdeda908d7c0fc493cb2abe889e176\" returns successfully" May 15 01:11:27.133386 env[1378]: time="2025-05-15T01:11:27.133024928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:27.134497 env[1378]: time="2025-05-15T01:11:27.134484442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:27.136457 env[1378]: time="2025-05-15T01:11:27.136440849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:27.147283 env[1378]: time="2025-05-15T01:11:27.147261834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 01:11:27.153076 env[1378]: time="2025-05-15T01:11:27.153042466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 01:11:27.153528 env[1378]: time="2025-05-15T01:11:27.146804197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:27.156175 env[1378]: time="2025-05-15T01:11:27.154982640Z" level=info msg="CreateContainer within sandbox \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 01:11:27.162644 env[1378]: time="2025-05-15T01:11:27.162616538Z" level=info msg="CreateContainer within sandbox \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\"" May 15 01:11:27.163096 env[1378]: time="2025-05-15T01:11:27.163078762Z" level=info msg="StartContainer for \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\"" May 15 01:11:27.239980 env[1378]: time="2025-05-15T01:11:27.239953735Z" level=info msg="StartContainer for \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\" returns successfully" May 15 01:11:27.504158 env[1378]: time="2025-05-15T01:11:27.504135467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:27.505170 env[1378]: time="2025-05-15T01:11:27.505157181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:27.506303 env[1378]: time="2025-05-15T01:11:27.506289874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 15 01:11:27.507435 env[1378]: time="2025-05-15T01:11:27.507422808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:27.507864 env[1378]: time="2025-05-15T01:11:27.507850645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 01:11:27.508902 env[1378]: time="2025-05-15T01:11:27.508844490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 01:11:27.510872 env[1378]: time="2025-05-15T01:11:27.510856740Z" level=info msg="CreateContainer within sandbox \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 01:11:27.528376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100595536.mount: Deactivated successfully. May 15 01:11:27.550051 env[1378]: time="2025-05-15T01:11:27.550019783Z" level=info msg="CreateContainer within sandbox \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\"" May 15 01:11:27.575285 env[1378]: time="2025-05-15T01:11:27.575260180Z" level=info msg="StartContainer for \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\"" May 15 01:11:27.603942 env[1378]: time="2025-05-15T01:11:27.603919323Z" level=info msg="StopContainer for \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\" with timeout 300 (s)" May 15 01:11:27.611527 env[1378]: time="2025-05-15T01:11:27.611506989Z" level=info msg="Stop container \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\" with signal terminated" May 15 01:11:27.699000 audit[4743]: NETFILTER_CFG table=filter:118 family=2 entries=10 op=nft_register_rule pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:27.711211 kernel: audit: type=1325 audit(1747271487.699:412): table=filter:118 family=2 entries=10 op=nft_register_rule pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:27.699000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff9ec0f270 a2=0 a3=7fff9ec0f25c items=0 ppid=2516 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:27.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:27.705000 audit[4743]: NETFILTER_CFG table=nat:119 family=2 entries=28 op=nft_register_rule pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:27.705000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7fff9ec0f270 a2=0 a3=7fff9ec0f25c items=0 ppid=2516 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:27.705000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:27.736135 env[1378]: time="2025-05-15T01:11:27.736107938Z" level=info msg="StartContainer for \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\" returns successfully" May 15 01:11:27.817366 systemd-networkd[1141]: cali8926d0478d6: Gained IPv6LL May 15 01:11:28.205671 systemd[1]: run-containerd-runc-k8s.io-eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1-runc.EU1vCm.mount: Deactivated successfully. May 15 01:11:28.346000 audit[4762]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:28.346000 audit[4762]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd0d0749d0 a2=0 a3=7ffd0d0749bc items=0 ppid=2516 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:28.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:28.353358 kubelet[2379]: I0515 01:11:28.350912 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68cd77bbfb-srwdw" podStartSLOduration=29.258375095 podStartE2EDuration="32.347261437s" podCreationTimestamp="2025-05-15 01:10:56 +0000 UTC" firstStartedPulling="2025-05-15 01:11:24.419817987 +0000 UTC m=+47.891701078" lastFinishedPulling="2025-05-15 01:11:27.508704339 +0000 UTC m=+50.980587420" observedRunningTime="2025-05-15 01:11:28.347048875 +0000 UTC m=+51.818931977" watchObservedRunningTime="2025-05-15 01:11:28.347261437 +0000 UTC m=+51.819144525" May 15 01:11:28.353000 audit[4762]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:28.353000 audit[4762]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd0d0749d0 a2=0 a3=7ffd0d0749bc items=0 ppid=2516 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:28.353000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:28.424518 env[1378]: time="2025-05-15T01:11:28.424486451Z" level=info msg="StopContainer for \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\" with timeout 5 (s)" May 15 01:11:28.424809 env[1378]: time="2025-05-15T01:11:28.424703400Z" level=info msg="Stop container \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\" with signal terminated" May 15 01:11:28.466668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1-rootfs.mount: Deactivated successfully. 
May 15 01:11:28.469526 env[1378]: time="2025-05-15T01:11:28.467450398Z" level=info msg="shim disconnected" id=eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1 May 15 01:11:28.469526 env[1378]: time="2025-05-15T01:11:28.467475326Z" level=warning msg="cleaning up after shim disconnected" id=eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1 namespace=k8s.io May 15 01:11:28.469526 env[1378]: time="2025-05-15T01:11:28.467481080Z" level=info msg="cleaning up dead shim" May 15 01:11:28.482194 env[1378]: time="2025-05-15T01:11:28.481071746Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4782 runtime=io.containerd.runc.v2\n" May 15 01:11:28.557881 env[1378]: time="2025-05-15T01:11:28.557847760Z" level=info msg="StopContainer for \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\" returns successfully" May 15 01:11:28.596542 env[1378]: time="2025-05-15T01:11:28.596520458Z" level=info msg="StopPodSandbox for \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\"" May 15 01:11:28.599133 env[1378]: time="2025-05-15T01:11:28.596648943Z" level=info msg="Container to stop \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:11:28.599133 env[1378]: time="2025-05-15T01:11:28.596661406Z" level=info msg="Container to stop \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:11:28.599133 env[1378]: time="2025-05-15T01:11:28.596669847Z" level=info msg="Container to stop \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:11:28.598535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7-shm.mount: Deactivated successfully. May 15 01:11:28.617286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7-rootfs.mount: Deactivated successfully. 
May 15 01:11:28.619273 env[1378]: time="2025-05-15T01:11:28.619176110Z" level=info msg="shim disconnected" id=576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7 May 15 01:11:28.619381 env[1378]: time="2025-05-15T01:11:28.619370196Z" level=warning msg="cleaning up after shim disconnected" id=576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7 namespace=k8s.io May 15 01:11:28.619848 env[1378]: time="2025-05-15T01:11:28.619424411Z" level=info msg="cleaning up dead shim" May 15 01:11:28.638457 env[1378]: time="2025-05-15T01:11:28.638434562Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4815 runtime=io.containerd.runc.v2\n" May 15 01:11:28.641155 env[1378]: time="2025-05-15T01:11:28.641139115Z" level=info msg="TearDown network for sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" successfully" May 15 01:11:28.641267 env[1378]: time="2025-05-15T01:11:28.641256549Z" level=info msg="StopPodSandbox for \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" returns successfully" May 15 01:11:28.654890 kubelet[2379]: I0515 01:11:28.654860 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68cd77bbfb-stq89" podStartSLOduration=29.913957425 podStartE2EDuration="32.654846751s" podCreationTimestamp="2025-05-15 01:10:56 +0000 UTC" firstStartedPulling="2025-05-15 01:11:24.411717097 +0000 UTC m=+47.883600181" lastFinishedPulling="2025-05-15 01:11:27.152606425 +0000 UTC m=+50.624489507" observedRunningTime="2025-05-15 01:11:28.371428125 +0000 UTC m=+51.843311218" watchObservedRunningTime="2025-05-15 01:11:28.654846751 +0000 UTC m=+52.126729844" May 15 01:11:28.691063 kubelet[2379]: I0515 01:11:28.691031 2379 topology_manager.go:215] "Topology Admit Handler" podUID="4b1f1225-f5f3-41d4-91c6-141700147b1e" podNamespace="calico-system" podName="calico-node-26ctq" May 15 01:11:28.699864 kubelet[2379]: E0515 01:11:28.699846 2379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="189c4de8-e8f1-4114-a090-fa3fd31b5ca8" containerName="flexvol-driver" May 15 01:11:28.700003 kubelet[2379]: E0515 01:11:28.699995 2379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="189c4de8-e8f1-4114-a090-fa3fd31b5ca8" containerName="install-cni" May 15 01:11:28.700056 kubelet[2379]: E0515 01:11:28.700048 2379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="189c4de8-e8f1-4114-a090-fa3fd31b5ca8" containerName="calico-node" May 15 01:11:28.705662 kubelet[2379]: I0515 01:11:28.705651 2379 memory_manager.go:354] "RemoveStaleState removing state" podUID="189c4de8-e8f1-4114-a090-fa3fd31b5ca8" containerName="calico-node" May 15 01:11:28.918787 kubelet[2379]: I0515 01:11:28.918043 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-bin-dir\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.918787 kubelet[2379]: I0515 01:11:28.918091 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-tigera-ca-bundle\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.918787 kubelet[2379]: I0515 01:11:28.918117 2379 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-net-dir\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.918787 kubelet[2379]: I0515 01:11:28.918136 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bccsw\" (UniqueName: \"kubernetes.io/projected/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-kube-api-access-bccsw\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.918787 kubelet[2379]: I0515 01:11:28.918149 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-var-run-calico\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.918787 kubelet[2379]: I0515 01:11:28.918163 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-lib-modules\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.920451 kubelet[2379]: I0515 01:11:28.918175 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-log-dir\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.920451 kubelet[2379]: I0515 01:11:28.918189 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-flexvol-driver-host\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.920451 kubelet[2379]: I0515 01:11:28.918201 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-policysync\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.920451 kubelet[2379]: I0515 01:11:28.918215 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-node-certs\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.920451 kubelet[2379]: I0515 01:11:28.918225 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-var-lib-calico\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.920451 kubelet[2379]: I0515 01:11:28.918263 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-xtables-lock\") pod \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\" (UID: \"189c4de8-e8f1-4114-a090-fa3fd31b5ca8\") " May 15 01:11:28.920651 kubelet[2379]: I0515 01:11:28.918320 2379 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-cni-bin-dir\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920651 kubelet[2379]: I0515 01:11:28.918344 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-cni-net-dir\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920651 kubelet[2379]: I0515 01:11:28.918363 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-policysync\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920651 kubelet[2379]: I0515 01:11:28.918409 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-var-run-calico\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920651 kubelet[2379]: I0515 01:11:28.918426 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-cni-log-dir\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920829 kubelet[2379]: I0515 01:11:28.918444 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-flexvol-driver-host\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920829 kubelet[2379]: I0515 01:11:28.918458 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b1f1225-f5f3-41d4-91c6-141700147b1e-tigera-ca-bundle\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920829 kubelet[2379]: I0515 01:11:28.918472 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-var-lib-calico\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920829 kubelet[2379]: I0515 01:11:28.918485 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-xtables-lock\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.920829 kubelet[2379]: I0515 01:11:28.918499 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/4b1f1225-f5f3-41d4-91c6-141700147b1e-node-certs\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.921730 kubelet[2379]: I0515 01:11:28.918512 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b1f1225-f5f3-41d4-91c6-141700147b1e-lib-modules\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.921730 kubelet[2379]: I0515 01:11:28.918525 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6bdq\" (UniqueName: \"kubernetes.io/projected/4b1f1225-f5f3-41d4-91c6-141700147b1e-kube-api-access-j6bdq\") pod \"calico-node-26ctq\" (UID: \"4b1f1225-f5f3-41d4-91c6-141700147b1e\") " pod="calico-system/calico-node-26ctq" May 15 01:11:28.923919 kubelet[2379]: I0515 01:11:28.922443 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.924172 kubelet[2379]: I0515 01:11:28.924149 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.942061 systemd[1]: var-lib-kubelet-pods-189c4de8\x2de8f1\x2d4114\x2da090\x2dfa3fd31b5ca8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 15 01:11:28.946346 kubelet[2379]: I0515 01:11:28.946319 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.946443 kubelet[2379]: I0515 01:11:28.946366 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.946443 kubelet[2379]: I0515 01:11:28.946392 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.946443 kubelet[2379]: I0515 01:11:28.946407 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.946443 kubelet[2379]: I0515 01:11:28.946422 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-policysync" (OuterVolumeSpecName: "policysync") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.946755 kubelet[2379]: I0515 01:11:28.946743 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.947134 kubelet[2379]: I0515 01:11:28.946839 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 01:11:28.947467 kubelet[2379]: I0515 01:11:28.947433 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 01:11:28.957850 kubelet[2379]: I0515 01:11:28.957769 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-kube-api-access-bccsw" (OuterVolumeSpecName: "kube-api-access-bccsw") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "kube-api-access-bccsw". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 01:11:28.960363 kubelet[2379]: I0515 01:11:28.960338 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-node-certs" (OuterVolumeSpecName: "node-certs") pod "189c4de8-e8f1-4114-a090-fa3fd31b5ca8" (UID: "189c4de8-e8f1-4114-a090-fa3fd31b5ca8"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 01:11:28.983451 env[1378]: time="2025-05-15T01:11:28.983418520Z" level=info msg="shim disconnected" id=7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5 May 15 01:11:28.983568 env[1378]: time="2025-05-15T01:11:28.983556612Z" level=warning msg="cleaning up after shim disconnected" id=7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5 namespace=k8s.io May 15 01:11:28.983639 env[1378]: time="2025-05-15T01:11:28.983629466Z" level=info msg="cleaning up dead shim" May 15 01:11:28.984000 audit[4845]: NETFILTER_CFG table=filter:122 family=2 entries=9 op=nft_register_rule pid=4845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:28.984000 audit[4845]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffce5ab55a0 a2=0 a3=7ffce5ab558c items=0 ppid=2516 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:28.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:28.989000 audit[4845]: NETFILTER_CFG table=nat:123 family=2 entries=27 op=nft_register_chain pid=4845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:28.989000 audit[4845]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffce5ab55a0 a2=0 a3=7ffce5ab558c items=0 ppid=2516 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:28.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:28.991139 env[1378]: time="2025-05-15T01:11:28.990065622Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4846 runtime=io.containerd.runc.v2\n" May 15 01:11:29.020030 kubelet[2379]: I0515 01:11:29.019560 2379 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.020030 kubelet[2379]: I0515 01:11:29.019581 2379 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bccsw\" (UniqueName: \"kubernetes.io/projected/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-kube-api-access-bccsw\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.020030 kubelet[2379]: I0515 01:11:29.019587 2379 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-var-run-calico\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.020030 kubelet[2379]: I0515 01:11:29.019593 2379 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.020030 kubelet[2379]: I0515 01:11:29.019598 2379 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.020030 
kubelet[2379]: I0515 01:11:29.019603 2379 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.020030 kubelet[2379]: I0515 01:11:29.019608 2379 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-policysync\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.020030 kubelet[2379]: I0515 01:11:29.019613 2379 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.023272 kubelet[2379]: I0515 01:11:29.019618 2379 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.023272 kubelet[2379]: I0515 01:11:29.019622 2379 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-node-certs\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.023272 kubelet[2379]: I0515 01:11:29.019626 2379 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.023272 kubelet[2379]: I0515 01:11:29.019632 2379 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/189c4de8-e8f1-4114-a090-fa3fd31b5ca8-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.039634 env[1378]: time="2025-05-15T01:11:29.039605903Z" level=info msg="StopContainer for \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\" returns successfully" May 15 01:11:29.040216 env[1378]: time="2025-05-15T01:11:29.039939881Z" level=info msg="StopPodSandbox for \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\"" May 15 01:11:29.040216 env[1378]: time="2025-05-15T01:11:29.039981006Z" level=info msg="Container to stop \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:11:29.058677 env[1378]: time="2025-05-15T01:11:29.058642814Z" level=info msg="shim disconnected" id=ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f May 15 01:11:29.058677 env[1378]: time="2025-05-15T01:11:29.058676477Z" level=warning msg="cleaning up after shim disconnected" id=ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f namespace=k8s.io May 15 01:11:29.058796 env[1378]: time="2025-05-15T01:11:29.058685343Z" level=info msg="cleaning up dead shim" May 15 01:11:29.065329 env[1378]: time="2025-05-15T01:11:29.065304842Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4881 runtime=io.containerd.runc.v2\n" May 15 01:11:29.087859 env[1378]: time="2025-05-15T01:11:29.087827260Z" level=info msg="TearDown network for sandbox \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\" successfully" May 15 01:11:29.087859 env[1378]: time="2025-05-15T01:11:29.087850925Z" level=info msg="StopPodSandbox for 
\"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\" returns successfully" May 15 01:11:29.207362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5-rootfs.mount: Deactivated successfully. May 15 01:11:29.207492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f-rootfs.mount: Deactivated successfully. May 15 01:11:29.207583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f-shm.mount: Deactivated successfully. May 15 01:11:29.207660 systemd[1]: var-lib-kubelet-pods-189c4de8\x2de8f1\x2d4114\x2da090\x2dfa3fd31b5ca8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbccsw.mount: Deactivated successfully. May 15 01:11:29.207740 systemd[1]: var-lib-kubelet-pods-189c4de8\x2de8f1\x2d4114\x2da090\x2dfa3fd31b5ca8-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 15 01:11:29.220678 kubelet[2379]: I0515 01:11:29.220579 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/17854bce-67b1-4319-98ae-62f549924452-typha-certs\") pod \"17854bce-67b1-4319-98ae-62f549924452\" (UID: \"17854bce-67b1-4319-98ae-62f549924452\") " May 15 01:11:29.220678 kubelet[2379]: I0515 01:11:29.220614 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69n4b\" (UniqueName: \"kubernetes.io/projected/17854bce-67b1-4319-98ae-62f549924452-kube-api-access-69n4b\") pod \"17854bce-67b1-4319-98ae-62f549924452\" (UID: \"17854bce-67b1-4319-98ae-62f549924452\") " May 15 01:11:29.220678 kubelet[2379]: I0515 01:11:29.220629 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17854bce-67b1-4319-98ae-62f549924452-tigera-ca-bundle\") pod \"17854bce-67b1-4319-98ae-62f549924452\" (UID: \"17854bce-67b1-4319-98ae-62f549924452\") " May 15 01:11:29.228777 systemd[1]: var-lib-kubelet-pods-17854bce\x2d67b1\x2d4319\x2d98ae\x2d62f549924452-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 15 01:11:29.230933 kubelet[2379]: I0515 01:11:29.230368 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17854bce-67b1-4319-98ae-62f549924452-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "17854bce-67b1-4319-98ae-62f549924452" (UID: "17854bce-67b1-4319-98ae-62f549924452"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 01:11:29.233146 systemd[1]: var-lib-kubelet-pods-17854bce\x2d67b1\x2d4319\x2d98ae\x2d62f549924452-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 15 01:11:29.236192 systemd[1]: var-lib-kubelet-pods-17854bce\x2d67b1\x2d4319\x2d98ae\x2d62f549924452-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d69n4b.mount: Deactivated successfully. May 15 01:11:29.237219 kubelet[2379]: I0515 01:11:29.237192 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17854bce-67b1-4319-98ae-62f549924452-kube-api-access-69n4b" (OuterVolumeSpecName: "kube-api-access-69n4b") pod "17854bce-67b1-4319-98ae-62f549924452" (UID: "17854bce-67b1-4319-98ae-62f549924452"). InnerVolumeSpecName "kube-api-access-69n4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 01:11:29.237645 kubelet[2379]: I0515 01:11:29.237630 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17854bce-67b1-4319-98ae-62f549924452-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "17854bce-67b1-4319-98ae-62f549924452" (UID: "17854bce-67b1-4319-98ae-62f549924452"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 01:11:29.321439 kubelet[2379]: I0515 01:11:29.321386 2379 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/17854bce-67b1-4319-98ae-62f549924452-typha-certs\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.321439 kubelet[2379]: I0515 01:11:29.321412 2379 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-69n4b\" (UniqueName: \"kubernetes.io/projected/17854bce-67b1-4319-98ae-62f549924452-kube-api-access-69n4b\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.321439 kubelet[2379]: I0515 01:11:29.321421 2379 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17854bce-67b1-4319-98ae-62f549924452-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 15 01:11:29.328950 env[1378]: time="2025-05-15T01:11:29.328911158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-26ctq,Uid:4b1f1225-f5f3-41d4-91c6-141700147b1e,Namespace:calico-system,Attempt:0,}" May 15 01:11:29.354651 env[1378]: time="2025-05-15T01:11:29.354474220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:29.354651 env[1378]: time="2025-05-15T01:11:29.354506021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:29.354651 env[1378]: time="2025-05-15T01:11:29.354537148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:29.354780 env[1378]: time="2025-05-15T01:11:29.354665873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e56def5ee9969cda74f7981ad7ef412215afeef2c91071f47e405f8eff171d7 pid=4905 runtime=io.containerd.runc.v2 May 15 01:11:29.372957 kubelet[2379]: I0515 01:11:29.372635 2379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:11:29.392057 kubelet[2379]: I0515 01:11:29.391673 2379 scope.go:117] "RemoveContainer" containerID="eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1" May 15 01:11:29.393608 env[1378]: time="2025-05-15T01:11:29.393589324Z" level=info msg="RemoveContainer for \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\"" May 15 01:11:29.395645 env[1378]: time="2025-05-15T01:11:29.395622974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-26ctq,Uid:4b1f1225-f5f3-41d4-91c6-141700147b1e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e56def5ee9969cda74f7981ad7ef412215afeef2c91071f47e405f8eff171d7\"" May 15 01:11:29.396584 env[1378]: time="2025-05-15T01:11:29.396570315Z" level=info msg="RemoveContainer for \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\" returns successfully" May 15 01:11:29.398897 kubelet[2379]: I0515 01:11:29.398538 2379 scope.go:117] "RemoveContainer" containerID="b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d" May 15 01:11:29.405065 env[1378]: time="2025-05-15T01:11:29.405044871Z" level=info msg="CreateContainer within sandbox \"3e56def5ee9969cda74f7981ad7ef412215afeef2c91071f47e405f8eff171d7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 01:11:29.405315 env[1378]: time="2025-05-15T01:11:29.405303326Z" level=info msg="RemoveContainer for \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\"" May 15 01:11:29.413708 env[1378]: time="2025-05-15T01:11:29.413679880Z" level=info msg="RemoveContainer for \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\" returns successfully" May 15 01:11:29.413892 kubelet[2379]: I0515 01:11:29.413877 2379 scope.go:117] "RemoveContainer" containerID="27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646" May 15 01:11:29.414733 env[1378]: time="2025-05-15T01:11:29.414719548Z" level=info msg="RemoveContainer for \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\"" May 15 01:11:29.417177 env[1378]: time="2025-05-15T01:11:29.416534166Z" level=info msg="CreateContainer within sandbox \"3e56def5ee9969cda74f7981ad7ef412215afeef2c91071f47e405f8eff171d7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8a6f23a7d531bdf08a89bc195a966a694f1ec98b3dd65e204c22dd8e7984ce4c\"" May 15 01:11:29.419656 env[1378]: time="2025-05-15T01:11:29.417544194Z" level=info msg="StartContainer for \"8a6f23a7d531bdf08a89bc195a966a694f1ec98b3dd65e204c22dd8e7984ce4c\"" May 15 01:11:29.421270 env[1378]: time="2025-05-15T01:11:29.421251771Z" level=info msg="RemoveContainer for \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\" returns successfully" May 15 01:11:29.421922 kubelet[2379]: I0515 01:11:29.421855 2379 scope.go:117] "RemoveContainer" containerID="eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1" May 15 01:11:29.422099 env[1378]: time="2025-05-15T01:11:29.422014038Z" level=error msg="ContainerStatus for 
\"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\": not found" May 15 01:11:29.422183 kubelet[2379]: E0515 01:11:29.422168 2379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\": not found" containerID="eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1" May 15 01:11:29.423806 kubelet[2379]: I0515 01:11:29.423785 2379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1"} err="failed to get container status \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"eeaedc8d1e0b5695200030997425bc8b2d1fbf52a485040757a89197cbb507e1\": not found" May 15 01:11:29.423806 kubelet[2379]: I0515 01:11:29.423806 2379 scope.go:117] "RemoveContainer" containerID="b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d" May 15 01:11:29.423956 env[1378]: time="2025-05-15T01:11:29.423927143Z" level=error msg="ContainerStatus for \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\": not found" May 15 01:11:29.424013 kubelet[2379]: E0515 01:11:29.424002 2379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\": not found" containerID="b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d" May 15 01:11:29.424039 kubelet[2379]: I0515 01:11:29.424013 2379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d"} err="failed to get container status \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8e3ded293533724689b73d384e48873141bd44f874e3581d2d8b962c361765d\": not found" May 15 01:11:29.424039 kubelet[2379]: I0515 01:11:29.424022 2379 scope.go:117] "RemoveContainer" containerID="27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646" May 15 01:11:29.424147 env[1378]: time="2025-05-15T01:11:29.424108344Z" level=error msg="ContainerStatus for \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\": not found" May 15 01:11:29.424203 kubelet[2379]: E0515 01:11:29.424181 2379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\": not found" containerID="27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646" May 15 01:11:29.424246 kubelet[2379]: I0515 01:11:29.424194 2379 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"containerd","ID":"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646"} err="failed to get container status \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\": rpc error: code = NotFound desc = an error occurred when try to find container \"27c434ebcac0e4a265bdd66f8c8962ac6822bba261f500ee32d3083cf603e646\": not found" May 15 01:11:29.424246 kubelet[2379]: I0515 01:11:29.424213 2379 scope.go:117] "RemoveContainer" containerID="7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5" May 15 01:11:29.424795 env[1378]: time="2025-05-15T01:11:29.424781865Z" level=info msg="RemoveContainer for \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\"" May 15 01:11:29.429771 env[1378]: time="2025-05-15T01:11:29.429750707Z" level=info msg="RemoveContainer for \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\" returns successfully" May 15 01:11:29.430445 kubelet[2379]: I0515 01:11:29.430098 2379 scope.go:117] "RemoveContainer" containerID="7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5" May 15 01:11:29.432362 env[1378]: time="2025-05-15T01:11:29.430770072Z" level=error msg="ContainerStatus for \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\": not found" May 15 01:11:29.432421 kubelet[2379]: E0515 01:11:29.430856 2379 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\": not found" containerID="7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5" May 15 01:11:29.432421 kubelet[2379]: I0515 01:11:29.430871 2379 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5"} err="failed to get container status \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b62095b255224e7f2bfe3fa14609e0be0b4f60b6aeba0ff5b4bde28207721d5\": not found" May 15 01:11:29.473035 env[1378]: time="2025-05-15T01:11:29.471928024Z" level=info msg="StartContainer for \"8a6f23a7d531bdf08a89bc195a966a694f1ec98b3dd65e204c22dd8e7984ce4c\" returns successfully" May 15 01:11:29.748125 env[1378]: time="2025-05-15T01:11:29.747931787Z" level=info msg="shim disconnected" id=8a6f23a7d531bdf08a89bc195a966a694f1ec98b3dd65e204c22dd8e7984ce4c May 15 01:11:29.748290 env[1378]: time="2025-05-15T01:11:29.748274888Z" level=warning msg="cleaning up after shim disconnected" id=8a6f23a7d531bdf08a89bc195a966a694f1ec98b3dd65e204c22dd8e7984ce4c namespace=k8s.io May 15 01:11:29.748355 env[1378]: time="2025-05-15T01:11:29.748342167Z" level=info msg="cleaning up dead shim" May 15 01:11:29.766652 env[1378]: time="2025-05-15T01:11:29.766618622Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4987 runtime=io.containerd.runc.v2\n" May 15 01:11:29.999000 audit[5001]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=5001 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:29.999000 audit[5001]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 
a1=7ffe2ea58680 a2=0 a3=7ffe2ea5866c items=0 ppid=2516 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:29.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:30.004000 audit[5001]: NETFILTER_CFG table=nat:125 family=2 entries=27 op=nft_unregister_chain pid=5001 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:30.004000 audit[5001]: SYSCALL arch=c000003e syscall=46 success=yes exit=6028 a0=3 a1=7ffe2ea58680 a2=0 a3=7ffe2ea5866c items=0 ppid=2516 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:30.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:30.236271 env[1378]: time="2025-05-15T01:11:30.236188586Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:30.252693 env[1378]: time="2025-05-15T01:11:30.252635275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:30.258436 env[1378]: time="2025-05-15T01:11:30.258418632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:30.267725 env[1378]: time="2025-05-15T01:11:30.267513900Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:30.267861 env[1378]: time="2025-05-15T01:11:30.267750371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 15 01:11:30.301947 env[1378]: time="2025-05-15T01:11:30.301922739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 01:11:30.322307 env[1378]: time="2025-05-15T01:11:30.322282514Z" level=info msg="CreateContainer within sandbox \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 01:11:30.331326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208938794.mount: Deactivated successfully. 
May 15 01:11:30.333327 env[1378]: time="2025-05-15T01:11:30.333298604Z" level=info msg="CreateContainer within sandbox \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\"" May 15 01:11:30.336558 env[1378]: time="2025-05-15T01:11:30.336537028Z" level=info msg="StartContainer for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\"" May 15 01:11:30.455586 env[1378]: time="2025-05-15T01:11:30.455558139Z" level=info msg="StartContainer for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" returns successfully" May 15 01:11:30.588999 env[1378]: time="2025-05-15T01:11:30.588939856Z" level=info msg="CreateContainer within sandbox \"3e56def5ee9969cda74f7981ad7ef412215afeef2c91071f47e405f8eff171d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 01:11:30.608264 env[1378]: time="2025-05-15T01:11:30.608229224Z" level=info msg="CreateContainer within sandbox \"3e56def5ee9969cda74f7981ad7ef412215afeef2c91071f47e405f8eff171d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"83d8482d9e5494bc7c779c15f409b515adb9a27513ffc5fc460f49b59d8a8ec0\"" May 15 01:11:30.613386 env[1378]: time="2025-05-15T01:11:30.613106145Z" level=info msg="StopContainer for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" with timeout 30 (s)" May 15 01:11:30.613529 env[1378]: time="2025-05-15T01:11:30.613497951Z" level=info msg="Stop container \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" with signal terminated" May 15 01:11:30.627194 env[1378]: time="2025-05-15T01:11:30.626400845Z" level=info msg="StartContainer for \"83d8482d9e5494bc7c779c15f409b515adb9a27513ffc5fc460f49b59d8a8ec0\"" May 15 01:11:30.665729 env[1378]: time="2025-05-15T01:11:30.665697708Z" level=error msg="ExecSync for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" failed" error="failed to exec in container: failed to start exec \"dd0a9c3fa02938f93cf4bf9869fbaf8adb9b2545c5fef93e9870647a4e0bb368\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" May 15 01:11:30.675339 env[1378]: time="2025-05-15T01:11:30.675276134Z" level=info msg="shim disconnected" id=4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5 May 15 01:11:30.675556 env[1378]: time="2025-05-15T01:11:30.675544997Z" level=warning msg="cleaning up after shim disconnected" id=4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5 namespace=k8s.io May 15 01:11:30.675795 env[1378]: time="2025-05-15T01:11:30.675784692Z" level=info msg="cleaning up dead shim" May 15 01:11:30.677421 kubelet[2379]: E0515 01:11:30.669154 2379 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"dd0a9c3fa02938f93cf4bf9869fbaf8adb9b2545c5fef93e9870647a4e0bb368\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5" cmd=["/usr/bin/check-status","-r"] May 15 01:11:30.679634 env[1378]: time="2025-05-15T01:11:30.679605502Z" level=error msg="ExecSync for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 
4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5 not found: not found" May 15 01:11:30.683776 env[1378]: time="2025-05-15T01:11:30.683753934Z" level=info msg="StartContainer for \"83d8482d9e5494bc7c779c15f409b515adb9a27513ffc5fc460f49b59d8a8ec0\" returns successfully" May 15 01:11:30.685211 env[1378]: time="2025-05-15T01:11:30.685198462Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5090 runtime=io.containerd.runc.v2\n" May 15 01:11:30.685898 env[1378]: time="2025-05-15T01:11:30.685885520Z" level=info msg="StopContainer for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" returns successfully" May 15 01:11:30.688323 kubelet[2379]: E0515 01:11:30.688293 2379 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5 not found: not found" containerID="4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5" cmd=["/usr/bin/check-status","-r"] May 15 01:11:30.690305 env[1378]: time="2025-05-15T01:11:30.690123065Z" level=error msg="ExecSync for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" May 15 01:11:30.690305 env[1378]: time="2025-05-15T01:11:30.690206154Z" level=info msg="StopPodSandbox for \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\"" May 15 01:11:30.690305 env[1378]: time="2025-05-15T01:11:30.690257430Z" level=info msg="Container to stop \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:11:30.691499 kubelet[2379]: E0515 01:11:30.690518 2379 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5" cmd=["/usr/bin/check-status","-r"] May 15 01:11:30.705067 env[1378]: time="2025-05-15T01:11:30.704960051Z" level=info msg="shim disconnected" id=dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba May 15 01:11:30.705067 env[1378]: time="2025-05-15T01:11:30.705074364Z" level=warning msg="cleaning up after shim disconnected" id=dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba namespace=k8s.io May 15 01:11:30.705067 env[1378]: time="2025-05-15T01:11:30.705083815Z" level=info msg="cleaning up dead shim" May 15 01:11:30.710458 env[1378]: time="2025-05-15T01:11:30.710435540Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5128 runtime=io.containerd.runc.v2\n" May 15 01:11:30.736798 kubelet[2379]: I0515 01:11:30.736775 2379 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17854bce-67b1-4319-98ae-62f549924452" path="/var/lib/kubelet/pods/17854bce-67b1-4319-98ae-62f549924452/volumes" May 15 01:11:30.738029 kubelet[2379]: I0515 01:11:30.738016 2379 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189c4de8-e8f1-4114-a090-fa3fd31b5ca8" path="/var/lib/kubelet/pods/189c4de8-e8f1-4114-a090-fa3fd31b5ca8/volumes" May 15 01:11:30.874454 kubelet[2379]: I0515 01:11:30.871893 2379 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="calico-system/calico-kube-controllers-77f75b95c-2fjvp" podStartSLOduration=31.151075123 podStartE2EDuration="34.871879279s" podCreationTimestamp="2025-05-15 01:10:56 +0000 UTC" firstStartedPulling="2025-05-15 01:11:26.568462983 +0000 UTC m=+50.040346065" lastFinishedPulling="2025-05-15 01:11:30.289267127 +0000 UTC m=+53.761150221" observedRunningTime="2025-05-15 01:11:30.679152645 +0000 UTC m=+54.151035739" watchObservedRunningTime="2025-05-15 01:11:30.871879279 +0000 UTC m=+54.343762370" May 15 01:11:30.874454 kubelet[2379]: I0515 01:11:30.871989 2379 topology_manager.go:215] "Topology Admit Handler" podUID="75f836d7-43b6-4685-85d6-3cebea44ef6b" podNamespace="calico-system" podName="calico-typha-dff9dfbcf-k6p8n" May 15 01:11:30.874454 kubelet[2379]: E0515 01:11:30.872031 2379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17854bce-67b1-4319-98ae-62f549924452" containerName="calico-typha" May 15 01:11:30.874454 kubelet[2379]: I0515 01:11:30.872055 2379 memory_manager.go:354] "RemoveStaleState removing state" podUID="17854bce-67b1-4319-98ae-62f549924452" containerName="calico-typha" May 15 01:11:30.906282 kernel: kauditd_printk_skb: 23 callbacks suppressed May 15 01:11:30.911550 kernel: audit: type=1325 audit(1747271490.901:420): table=filter:126 family=2 entries=10 op=nft_register_rule pid=5165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:30.911587 kernel: audit: type=1300 audit(1747271490.901:420): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd99812d00 a2=0 a3=7ffd99812cec items=0 ppid=2516 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:30.911608 kernel: audit: type=1327 audit(1747271490.901:420): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:30.901000 audit[5165]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:30.901000 audit[5165]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd99812d00 a2=0 a3=7ffd99812cec items=0 ppid=2516 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:30.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:30.912229 kubelet[2379]: I0515 01:11:30.909958 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75f836d7-43b6-4685-85d6-3cebea44ef6b-tigera-ca-bundle\") pod \"calico-typha-dff9dfbcf-k6p8n\" (UID: \"75f836d7-43b6-4685-85d6-3cebea44ef6b\") " pod="calico-system/calico-typha-dff9dfbcf-k6p8n" May 15 01:11:30.912229 kubelet[2379]: I0515 01:11:30.909977 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/75f836d7-43b6-4685-85d6-3cebea44ef6b-typha-certs\") pod \"calico-typha-dff9dfbcf-k6p8n\" (UID: \"75f836d7-43b6-4685-85d6-3cebea44ef6b\") " pod="calico-system/calico-typha-dff9dfbcf-k6p8n" May 15 01:11:30.912229 kubelet[2379]: I0515 01:11:30.909988 
2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz2wx\" (UniqueName: \"kubernetes.io/projected/75f836d7-43b6-4685-85d6-3cebea44ef6b-kube-api-access-hz2wx\") pod \"calico-typha-dff9dfbcf-k6p8n\" (UID: \"75f836d7-43b6-4685-85d6-3cebea44ef6b\") " pod="calico-system/calico-typha-dff9dfbcf-k6p8n" May 15 01:11:30.911000 audit[5165]: NETFILTER_CFG table=nat:127 family=2 entries=20 op=nft_register_rule pid=5165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:30.911000 audit[5165]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd99812d00 a2=0 a3=7ffd99812cec items=0 ppid=2516 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:30.920655 kernel: audit: type=1325 audit(1747271490.911:421): table=nat:127 family=2 entries=20 op=nft_register_rule pid=5165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:30.920701 kernel: audit: type=1300 audit(1747271490.911:421): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd99812d00 a2=0 a3=7ffd99812cec items=0 ppid=2516 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:30.920722 kernel: audit: type=1327 audit(1747271490.911:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:30.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:30.925616 kubelet[2379]: I0515 01:11:30.925599 2379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:11:31.140462 systemd-networkd[1141]: cali8926d0478d6: Link DOWN May 15 01:11:31.140467 systemd-networkd[1141]: cali8926d0478d6: Lost carrier May 15 01:11:31.213702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba-rootfs.mount: Deactivated successfully. May 15 01:11:31.213813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba-shm.mount: Deactivated successfully. May 15 01:11:31.251076 env[1378]: time="2025-05-15T01:11:31.250668783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dff9dfbcf-k6p8n,Uid:75f836d7-43b6-4685-85d6-3cebea44ef6b,Namespace:calico-system,Attempt:0,}" May 15 01:11:31.272975 env[1378]: time="2025-05-15T01:11:31.272943996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:31.273084 env[1378]: time="2025-05-15T01:11:31.273069922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:31.273147 env[1378]: time="2025-05-15T01:11:31.273134417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:31.273281 env[1378]: time="2025-05-15T01:11:31.273265980Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e5ef5f9f61dd0b945cf9691e45a5d0717831dd6ec6daf85ff12334dd357309e pid=5186 runtime=io.containerd.runc.v2 May 15 01:11:31.326787 env[1378]: time="2025-05-15T01:11:31.326755749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dff9dfbcf-k6p8n,Uid:75f836d7-43b6-4685-85d6-3cebea44ef6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e5ef5f9f61dd0b945cf9691e45a5d0717831dd6ec6daf85ff12334dd357309e\"" May 15 01:11:31.401900 env[1378]: time="2025-05-15T01:11:31.401449643Z" level=info msg="CreateContainer within sandbox \"7e5ef5f9f61dd0b945cf9691e45a5d0717831dd6ec6daf85ff12334dd357309e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 01:11:31.440452 env[1378]: time="2025-05-15T01:11:31.440414904Z" level=info msg="CreateContainer within sandbox \"7e5ef5f9f61dd0b945cf9691e45a5d0717831dd6ec6daf85ff12334dd357309e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"20ce2dcd75c07e4d002c4da77f968ae45344e29a13b3a061c7f15c4e7ff390de\"" May 15 01:11:31.460018 env[1378]: time="2025-05-15T01:11:31.459994513Z" level=info msg="StartContainer for \"20ce2dcd75c07e4d002c4da77f968ae45344e29a13b3a061c7f15c4e7ff390de\"" May 15 01:11:31.520817 systemd[1]: Started sshd@8-139.178.70.104:22-14.29.198.130:39708.service. May 15 01:11:31.527887 kernel: audit: type=1130 audit(1747271491.519:422): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.104:22-14.29.198.130:39708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:31.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.104:22-14.29.198.130:39708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:11:31.536873 env[1378]: time="2025-05-15T01:11:31.530608250Z" level=info msg="StartContainer for \"20ce2dcd75c07e4d002c4da77f968ae45344e29a13b3a061c7f15c4e7ff390de\" returns successfully" May 15 01:11:31.938000 audit[5258]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule pid=5258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:31.942349 kernel: audit: type=1325 audit(1747271491.938:423): table=filter:128 family=2 entries=10 op=nft_register_rule pid=5258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:31.938000 audit[5258]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffebfe41fa0 a2=0 a3=7ffebfe41f8c items=0 ppid=2516 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:31.948431 kernel: audit: type=1300 audit(1747271491.938:423): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffebfe41fa0 a2=0 a3=7ffebfe41f8c items=0 ppid=2516 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:31.948830 kernel: audit: type=1327 audit(1747271491.938:423): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:31.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:31.952000 audit[5258]: NETFILTER_CFG table=nat:129 family=2 entries=32 op=nft_register_chain pid=5258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:31.952000 audit[5258]: SYSCALL arch=c000003e syscall=46 success=yes exit=10468 a0=3 a1=7ffebfe41fa0 a2=0 a3=7ffebfe41f8c items=0 ppid=2516 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:31.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:31.954268 kubelet[2379]: I0515 01:11:31.953456 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.131 [INFO][5160] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.136 [INFO][5160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" iface="eth0" netns="/var/run/netns/cni-907da24a-5440-c250-22e9-268087ac21d2" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.136 [INFO][5160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" iface="eth0" netns="/var/run/netns/cni-907da24a-5440-c250-22e9-268087ac21d2" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.150 [INFO][5160] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" after=14.032324ms iface="eth0" netns="/var/run/netns/cni-907da24a-5440-c250-22e9-268087ac21d2" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.150 [INFO][5160] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.150 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.859 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.894 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:31.894 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:32.020 [INFO][5174] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:32.020 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:32.021 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:32.028718 env[1378]: 2025-05-15 01:11:32.025 [INFO][5160] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:32.028718 env[1378]: time="2025-05-15T01:11:32.028701793Z" level=info msg="TearDown network for sandbox \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\" successfully" May 15 01:11:32.028718 env[1378]: time="2025-05-15T01:11:32.028721426Z" level=info msg="StopPodSandbox for \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\" returns successfully" May 15 01:11:32.057421 env[1378]: time="2025-05-15T01:11:32.057395637Z" level=info msg="StopPodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\"" May 15 01:11:32.208605 systemd[1]: run-netns-cni\x2d907da24a\x2d5440\x2dc250\x2d22e9\x2d268087ac21d2.mount: Deactivated successfully. 
May 15 01:11:32.210783 env[1378]: time="2025-05-15T01:11:32.209411390Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:32.218085 env[1378]: time="2025-05-15T01:11:32.218053564Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:32.225260 env[1378]: time="2025-05-15T01:11:32.220048883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:32.225260 env[1378]: time="2025-05-15T01:11:32.222529503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 15 01:11:32.225260 env[1378]: time="2025-05-15T01:11:32.222693729Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 01:11:32.466217 env[1378]: time="2025-05-15T01:11:32.466147849Z" level=info msg="CreateContainer within sandbox \"f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 01:11:32.493740 env[1378]: time="2025-05-15T01:11:32.493692091Z" level=info msg="CreateContainer within sandbox \"f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"39ac3d391a4dbb64d08ab1ccb0d651cbc4638e852f20a9f1bd1cee9905ff9bfc\"" May 15 01:11:32.503112 env[1378]: time="2025-05-15T01:11:32.503087129Z" level=info msg="StartContainer for \"39ac3d391a4dbb64d08ab1ccb0d651cbc4638e852f20a9f1bd1cee9905ff9bfc\"" May 15 01:11:32.578036 env[1378]: time="2025-05-15T01:11:32.578012574Z" level=info msg="StartContainer for \"39ac3d391a4dbb64d08ab1ccb0d651cbc4638e852f20a9f1bd1cee9905ff9bfc\" returns successfully" May 15 01:11:32.636229 sshd[5251]: Invalid user teacher from 14.29.198.130 port 39708 May 15 01:11:32.695154 sshd[5251]: pam_faillock(sshd:auth): User unknown May 15 01:11:32.696000 audit[5251]: USER_AUTH pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="teacher" exe="/usr/sbin/sshd" hostname=14.29.198.130 addr=14.29.198.130 terminal=ssh res=failed' May 15 01:11:32.697338 sshd[5251]: pam_unix(sshd:auth): check pass; user unknown May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.293 [WARNING][5275] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0", GenerateName:"calico-kube-controllers-77f75b95c-", Namespace:"calico-system", SelfLink:"", UID:"285861ef-806b-4ef3-881e-bc2ab411454c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f75b95c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba", Pod:"calico-kube-controllers-77f75b95c-2fjvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8926d0478d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.293 [INFO][5275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.293 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" iface="eth0" netns="" May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.293 [INFO][5275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.295 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.683 [INFO][5282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.686 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.686 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.698 [WARNING][5282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.698 [INFO][5282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.699 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:32.708674 env[1378]: 2025-05-15 01:11:32.701 [INFO][5275] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:32.708674 env[1378]: time="2025-05-15T01:11:32.703297102Z" level=info msg="TearDown network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" successfully" May 15 01:11:32.708674 env[1378]: time="2025-05-15T01:11:32.703315832Z" level=info msg="StopPodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" returns successfully" May 15 01:11:32.697361 sshd[5251]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.29.198.130 May 15 01:11:32.697769 sshd[5251]: pam_faillock(sshd:auth): User unknown May 15 01:11:32.855947 kubelet[2379]: I0515 01:11:32.840949 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-dff9dfbcf-k6p8n" podStartSLOduration=5.800779937 podStartE2EDuration="5.800779937s" podCreationTimestamp="2025-05-15 01:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:11:32.226881705 +0000 UTC m=+55.698764800" watchObservedRunningTime="2025-05-15 01:11:32.800779937 +0000 UTC m=+56.272663025" May 15 01:11:32.863609 kubelet[2379]: I0515 01:11:32.863590 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/285861ef-806b-4ef3-881e-bc2ab411454c-tigera-ca-bundle\") pod \"285861ef-806b-4ef3-881e-bc2ab411454c\" (UID: \"285861ef-806b-4ef3-881e-bc2ab411454c\") " May 15 01:11:32.867629 kubelet[2379]: I0515 01:11:32.867607 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4xz2\" (UniqueName: \"kubernetes.io/projected/285861ef-806b-4ef3-881e-bc2ab411454c-kube-api-access-r4xz2\") pod \"285861ef-806b-4ef3-881e-bc2ab411454c\" (UID: \"285861ef-806b-4ef3-881e-bc2ab411454c\") " May 15 01:11:33.112763 kubelet[2379]: I0515 01:11:33.112700 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285861ef-806b-4ef3-881e-bc2ab411454c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "285861ef-806b-4ef3-881e-bc2ab411454c" (UID: "285861ef-806b-4ef3-881e-bc2ab411454c"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 01:11:33.127776 kubelet[2379]: I0515 01:11:33.069493 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285861ef-806b-4ef3-881e-bc2ab411454c-kube-api-access-r4xz2" (OuterVolumeSpecName: "kube-api-access-r4xz2") pod "285861ef-806b-4ef3-881e-bc2ab411454c" (UID: "285861ef-806b-4ef3-881e-bc2ab411454c"). InnerVolumeSpecName "kube-api-access-r4xz2". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 01:11:33.175861 kubelet[2379]: I0515 01:11:33.175837 2379 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/285861ef-806b-4ef3-881e-bc2ab411454c-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 15 01:11:33.176005 kubelet[2379]: I0515 01:11:33.175996 2379 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r4xz2\" (UniqueName: \"kubernetes.io/projected/285861ef-806b-4ef3-881e-bc2ab411454c-kube-api-access-r4xz2\") on node \"localhost\" DevicePath \"\"" May 15 01:11:33.208190 systemd[1]: run-containerd-runc-k8s.io-39ac3d391a4dbb64d08ab1ccb0d651cbc4638e852f20a9f1bd1cee9905ff9bfc-runc.WqSQQg.mount: Deactivated successfully. May 15 01:11:33.208303 systemd[1]: var-lib-kubelet-pods-285861ef\x2d806b\x2d4ef3\x2d881e\x2dbc2ab411454c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. May 15 01:11:33.208371 systemd[1]: var-lib-kubelet-pods-285861ef\x2d806b\x2d4ef3\x2d881e\x2dbc2ab411454c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4xz2.mount: Deactivated successfully. May 15 01:11:33.311058 kubelet[2379]: I0515 01:11:33.311030 2379 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 01:11:33.311830 kubelet[2379]: I0515 01:11:33.311818 2379 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 01:11:33.449611 kubelet[2379]: I0515 01:11:33.449574 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lxhjw" podStartSLOduration=28.233079435 podStartE2EDuration="37.449559172s" podCreationTimestamp="2025-05-15 01:10:56 +0000 UTC" firstStartedPulling="2025-05-15 01:11:23.078770288 +0000 UTC m=+46.550653370" lastFinishedPulling="2025-05-15 01:11:32.295250022 +0000 UTC m=+55.767133107" observedRunningTime="2025-05-15 01:11:33.406461512 +0000 UTC m=+56.878344598" watchObservedRunningTime="2025-05-15 01:11:33.449559172 +0000 UTC m=+56.921442260" May 15 01:11:33.707000 audit[5331]: NETFILTER_CFG table=filter:130 family=2 entries=9 op=nft_register_rule pid=5331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:33.707000 audit[5331]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc0d160720 a2=0 a3=7ffc0d16070c items=0 ppid=2516 pid=5331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:33.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:33.711000 audit[5331]: NETFILTER_CFG table=nat:131 family=2 entries=27 op=nft_register_chain pid=5331 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:33.711000 audit[5331]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc0d160720 a2=0 a3=7ffc0d16070c items=0 ppid=2516 pid=5331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:33.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:33.852420 env[1378]: time="2025-05-15T01:11:33.852388516Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/10-calico.conflist\": WRITE)" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 15 01:11:33.874045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83d8482d9e5494bc7c779c15f409b515adb9a27513ffc5fc460f49b59d8a8ec0-rootfs.mount: Deactivated successfully. May 15 01:11:33.876210 env[1378]: time="2025-05-15T01:11:33.876178682Z" level=info msg="shim disconnected" id=83d8482d9e5494bc7c779c15f409b515adb9a27513ffc5fc460f49b59d8a8ec0 May 15 01:11:33.876329 env[1378]: time="2025-05-15T01:11:33.876319000Z" level=warning msg="cleaning up after shim disconnected" id=83d8482d9e5494bc7c779c15f409b515adb9a27513ffc5fc460f49b59d8a8ec0 namespace=k8s.io May 15 01:11:33.876399 env[1378]: time="2025-05-15T01:11:33.876389152Z" level=info msg="cleaning up dead shim" May 15 01:11:33.881600 env[1378]: time="2025-05-15T01:11:33.881570547Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5345 runtime=io.containerd.runc.v2\n" May 15 01:11:34.432937 env[1378]: time="2025-05-15T01:11:34.432909940Z" level=info msg="CreateContainer within sandbox \"3e56def5ee9969cda74f7981ad7ef412215afeef2c91071f47e405f8eff171d7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 01:11:34.450349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198175536.mount: Deactivated successfully. 
May 15 01:11:34.467493 env[1378]: time="2025-05-15T01:11:34.467449193Z" level=info msg="CreateContainer within sandbox \"3e56def5ee9969cda74f7981ad7ef412215afeef2c91071f47e405f8eff171d7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7e054a5c01bf65fcf4d801a27fe363fd883c70324936fa186462eea05da3b1fc\"" May 15 01:11:34.481299 env[1378]: time="2025-05-15T01:11:34.481280715Z" level=info msg="StartContainer for \"7e054a5c01bf65fcf4d801a27fe363fd883c70324936fa186462eea05da3b1fc\"" May 15 01:11:34.537205 env[1378]: time="2025-05-15T01:11:34.537181725Z" level=info msg="StartContainer for \"7e054a5c01bf65fcf4d801a27fe363fd883c70324936fa186462eea05da3b1fc\" returns successfully" May 15 01:11:34.746879 kubelet[2379]: I0515 01:11:34.745371 2379 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285861ef-806b-4ef3-881e-bc2ab411454c" path="/var/lib/kubelet/pods/285861ef-806b-4ef3-881e-bc2ab411454c/volumes" May 15 01:11:34.899849 sshd[5251]: Failed password for invalid user teacher from 14.29.198.130 port 39708 ssh2 May 15 01:11:35.072842 kubelet[2379]: I0515 01:11:35.072769 2379 topology_manager.go:215] "Topology Admit Handler" podUID="a8f9bbef-d963-4aa8-acb0-9ae60b1df73b" podNamespace="calico-system" podName="calico-kube-controllers-7dbd7dff85-pc8d6" May 15 01:11:35.079212 kubelet[2379]: E0515 01:11:35.079193 2379 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="285861ef-806b-4ef3-881e-bc2ab411454c" containerName="calico-kube-controllers" May 15 01:11:35.079303 kubelet[2379]: I0515 01:11:35.079228 2379 memory_manager.go:354] "RemoveStaleState removing state" podUID="285861ef-806b-4ef3-881e-bc2ab411454c" containerName="calico-kube-controllers" May 15 01:11:35.216221 kubelet[2379]: I0515 01:11:35.216199 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldbp\" (UniqueName: \"kubernetes.io/projected/a8f9bbef-d963-4aa8-acb0-9ae60b1df73b-kube-api-access-8ldbp\") pod \"calico-kube-controllers-7dbd7dff85-pc8d6\" (UID: \"a8f9bbef-d963-4aa8-acb0-9ae60b1df73b\") " pod="calico-system/calico-kube-controllers-7dbd7dff85-pc8d6" May 15 01:11:35.217850 kubelet[2379]: I0515 01:11:35.217840 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8f9bbef-d963-4aa8-acb0-9ae60b1df73b-tigera-ca-bundle\") pod \"calico-kube-controllers-7dbd7dff85-pc8d6\" (UID: \"a8f9bbef-d963-4aa8-acb0-9ae60b1df73b\") " pod="calico-system/calico-kube-controllers-7dbd7dff85-pc8d6" May 15 01:11:35.329832 kubelet[2379]: I0515 01:11:35.329762 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-26ctq" podStartSLOduration=7.329749671 podStartE2EDuration="7.329749671s" podCreationTimestamp="2025-05-15 01:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:11:35.329488694 +0000 UTC m=+58.801371787" watchObservedRunningTime="2025-05-15 01:11:35.329749671 +0000 UTC m=+58.801632759" May 15 01:11:35.392798 env[1378]: time="2025-05-15T01:11:35.392774157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dbd7dff85-pc8d6,Uid:a8f9bbef-d963-4aa8-acb0-9ae60b1df73b,Namespace:calico-system,Attempt:0,}" May 15 01:11:35.609713 sshd[5251]: Received disconnect from 14.29.198.130 port 39708:11: Bye Bye [preauth] May 15 01:11:35.609713 sshd[5251]: 
Disconnected from invalid user teacher 14.29.198.130 port 39708 [preauth] May 15 01:11:35.648315 systemd[1]: sshd@8-139.178.70.104:22-14.29.198.130:39708.service: Deactivated successfully. May 15 01:11:35.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.104:22-14.29.198.130:39708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:35.888492 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 01:11:35.897764 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali64ead7680a7: link becomes ready May 15 01:11:35.897300 systemd-networkd[1141]: cali64ead7680a7: Link UP May 15 01:11:35.897671 systemd-networkd[1141]: cali64ead7680a7: Gained carrier May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.605 [INFO][5440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0 calico-kube-controllers-7dbd7dff85- calico-system a8f9bbef-d963-4aa8-acb0-9ae60b1df73b 1023 0 2025-05-15 01:11:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dbd7dff85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7dbd7dff85-pc8d6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali64ead7680a7 [] []}} ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Namespace="calico-system" Pod="calico-kube-controllers-7dbd7dff85-pc8d6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.633 [INFO][5440] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Namespace="calico-system" Pod="calico-kube-controllers-7dbd7dff85-pc8d6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.836 [INFO][5455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" HandleID="k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Workload="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.856 [INFO][5455] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" HandleID="k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Workload="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7dbd7dff85-pc8d6", "timestamp":"2025-05-15 01:11:35.836052785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.856 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.856 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.856 [INFO][5455] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.858 [INFO][5455] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.865 [INFO][5455] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.869 [INFO][5455] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.870 [INFO][5455] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.872 [INFO][5455] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.872 [INFO][5455] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.873 [INFO][5455] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.875 [INFO][5455] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.881 [INFO][5455] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.881 [INFO][5455] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" host="localhost" May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.882 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:11:35.907786 env[1378]: 2025-05-15 01:11:35.882 [INFO][5455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" HandleID="k8s-pod-network.47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Workload="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" May 15 01:11:35.908988 env[1378]: 2025-05-15 01:11:35.883 [INFO][5440] cni-plugin/k8s.go 386: Populated endpoint ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Namespace="calico-system" Pod="calico-kube-controllers-7dbd7dff85-pc8d6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0", GenerateName:"calico-kube-controllers-7dbd7dff85-", Namespace:"calico-system", SelfLink:"", UID:"a8f9bbef-d963-4aa8-acb0-9ae60b1df73b", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dbd7dff85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7dbd7dff85-pc8d6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali64ead7680a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:35.908988 env[1378]: 2025-05-15 01:11:35.883 [INFO][5440] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.136/32] ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Namespace="calico-system" Pod="calico-kube-controllers-7dbd7dff85-pc8d6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" May 15 01:11:35.908988 env[1378]: 2025-05-15 01:11:35.883 [INFO][5440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64ead7680a7 ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Namespace="calico-system" Pod="calico-kube-controllers-7dbd7dff85-pc8d6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" May 15 01:11:35.908988 env[1378]: 2025-05-15 01:11:35.890 [INFO][5440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Namespace="calico-system" Pod="calico-kube-controllers-7dbd7dff85-pc8d6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" May 15 01:11:35.908988 env[1378]: 2025-05-15 01:11:35.891 [INFO][5440] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Namespace="calico-system" Pod="calico-kube-controllers-7dbd7dff85-pc8d6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0", GenerateName:"calico-kube-controllers-7dbd7dff85-", Namespace:"calico-system", SelfLink:"", UID:"a8f9bbef-d963-4aa8-acb0-9ae60b1df73b", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dbd7dff85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c", Pod:"calico-kube-controllers-7dbd7dff85-pc8d6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali64ead7680a7", MAC:"ca:32:8e:c6:51:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:35.908988 env[1378]: 2025-05-15 01:11:35.901 [INFO][5440] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c" Namespace="calico-system" Pod="calico-kube-controllers-7dbd7dff85-pc8d6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbd7dff85--pc8d6-eth0" May 15 01:11:35.945023 env[1378]: time="2025-05-15T01:11:35.944986288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:11:35.945114 env[1378]: time="2025-05-15T01:11:35.945028235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:11:35.945114 env[1378]: time="2025-05-15T01:11:35.945049536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:11:35.945289 env[1378]: time="2025-05-15T01:11:35.945269202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c pid=5480 runtime=io.containerd.runc.v2 May 15 01:11:35.972686 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:11:35.994762 env[1378]: time="2025-05-15T01:11:35.994734485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dbd7dff85-pc8d6,Uid:a8f9bbef-d963-4aa8-acb0-9ae60b1df73b,Namespace:calico-system,Attempt:0,} returns sandbox id \"47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c\"" May 15 01:11:36.049299 env[1378]: time="2025-05-15T01:11:36.049271953Z" level=info msg="CreateContainer within sandbox \"47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 01:11:36.054110 env[1378]: time="2025-05-15T01:11:36.054090385Z" level=info msg="CreateContainer within sandbox \"47738cd4541601f5d807941bf29fd9e04b60411496b8e89ef439c571c421657c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a1858665b00d342af3cf256b101292e26a4df1c4c70a0907b3ddae8abba8b222\"" May 15 01:11:36.054494 env[1378]: time="2025-05-15T01:11:36.054482044Z" level=info msg="StartContainer for \"a1858665b00d342af3cf256b101292e26a4df1c4c70a0907b3ddae8abba8b222\"" May 15 01:11:36.096285 env[1378]: time="2025-05-15T01:11:36.096250888Z" level=info msg="StartContainer for \"a1858665b00d342af3cf256b101292e26a4df1c4c70a0907b3ddae8abba8b222\" returns successfully" May 15 01:11:36.264291 kernel: kauditd_printk_skb: 11 callbacks suppressed May 15 01:11:36.266929 kernel: audit: type=1400 audit(1747271496.259:429): avc: denied { write } for pid=5608 comm="tee" name="fd" dev="proc" ino=45174 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.266973 kernel: audit: type=1400 audit(1747271496.261:430): avc: denied { write } for pid=5613 comm="tee" name="fd" dev="proc" ino=44518 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.270611 kernel: audit: type=1300 audit(1747271496.261:430): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0b0bea22 a2=241 a3=1b6 items=1 ppid=5565 pid=5613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.259000 audit[5608]: AVC avc: denied { write } for pid=5608 comm="tee" name="fd" dev="proc" ino=45174 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.261000 audit[5613]: AVC avc: denied { write } for pid=5613 comm="tee" name="fd" dev="proc" ino=44518 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.261000 audit[5613]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0b0bea22 a2=241 a3=1b6 items=1 ppid=5565 pid=5613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 
01:11:36.280499 kernel: audit: type=1307 audit(1747271496.261:430): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 15 01:11:36.280539 kernel: audit: type=1302 audit(1747271496.261:430): item=0 name="/dev/fd/63" inode=44513 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.261000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 15 01:11:36.261000 audit: PATH item=0 name="/dev/fd/63" inode=44513 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.290653 kernel: audit: type=1327 audit(1747271496.261:430): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.290700 kernel: audit: type=1300 audit(1747271496.259:429): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc44d81a23 a2=241 a3=1b6 items=1 ppid=5557 pid=5608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.259000 audit[5608]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc44d81a23 a2=241 a3=1b6 items=1 ppid=5557 pid=5608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.259000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 15 01:11:36.259000 audit: PATH item=0 name="/dev/fd/63" inode=45164 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.302061 kernel: audit: type=1307 audit(1747271496.259:429): cwd="/etc/service/enabled/node-status-reporter/log" May 15 01:11:36.302122 kernel: audit: type=1302 audit(1747271496.259:429): item=0 name="/dev/fd/63" inode=45164 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.307246 kernel: audit: type=1327 audit(1747271496.259:429): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.281000 audit[5617]: AVC avc: denied { write } for pid=5617 comm="tee" name="fd" dev="proc" ino=45181 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.281000 audit[5617]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd323a8a32 a2=241 a3=1b6 items=1 ppid=5559 pid=5617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.281000 audit: CWD 
cwd="/etc/service/enabled/confd/log" May 15 01:11:36.281000 audit: PATH item=0 name="/dev/fd/63" inode=45171 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.338000 audit[5627]: AVC avc: denied { write } for pid=5627 comm="tee" name="fd" dev="proc" ino=45192 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.338000 audit[5627]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf64eda34 a2=241 a3=1b6 items=1 ppid=5556 pid=5627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.338000 audit: CWD cwd="/etc/service/enabled/cni/log" May 15 01:11:36.338000 audit: PATH item=0 name="/dev/fd/63" inode=44524 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.368000 audit[5625]: AVC avc: denied { write } for pid=5625 comm="tee" name="fd" dev="proc" ino=44527 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.368000 audit[5625]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec8094a32 a2=241 a3=1b6 items=1 ppid=5561 pid=5625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.368000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 15 01:11:36.368000 audit: PATH item=0 name="/dev/fd/63" inode=44523 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.400000 audit[5619]: AVC avc: denied { write } for pid=5619 comm="tee" name="fd" dev="proc" ino=45201 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.400000 audit[5619]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdebab9a32 a2=241 a3=1b6 items=1 ppid=5567 pid=5619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.400000 audit: CWD cwd="/etc/service/enabled/felix/log" May 15 01:11:36.400000 audit: PATH item=0 name="/dev/fd/63" inode=45175 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.400000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.401000 audit[5621]: AVC avc: denied { write } for pid=5621 comm="tee" name="fd" dev="proc" ino=44537 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 01:11:36.401000 audit[5621]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff5a652a33 a2=241 a3=1b6 items=1 ppid=5568 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.401000 audit: CWD cwd="/etc/service/enabled/bird/log" May 15 01:11:36.401000 audit: PATH item=0 name="/dev/fd/63" inode=44522 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 01:11:36.401000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 01:11:36.442435 systemd[1]: run-containerd-runc-k8s.io-7e054a5c01bf65fcf4d801a27fe363fd883c70324936fa186462eea05da3b1fc-runc.PLLHqD.mount: Deactivated successfully. May 15 01:11:36.748000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.748000 audit: BPF prog-id=29 op=LOAD May 15 01:11:36.748000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe4385a140 a2=98 a3=3 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.748000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.748000 audit: BPF prog-id=29 op=UNLOAD May 15 01:11:36.750000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit: BPF prog-id=30 op=LOAD May 15 01:11:36.750000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe43859f20 a2=74 a3=540051 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.750000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.750000 audit: BPF prog-id=30 op=UNLOAD May 15 01:11:36.750000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.750000 audit: BPF prog-id=31 op=LOAD May 15 01:11:36.750000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe43859f50 a2=94 a3=2 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.750000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.750000 audit: BPF prog-id=31 op=UNLOAD May 15 01:11:36.822000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit: BPF prog-id=32 op=LOAD May 15 01:11:36.822000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe43859e10 a2=40 a3=1 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.822000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.822000 audit: BPF prog-id=32 op=UNLOAD May 15 01:11:36.822000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.822000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe43859ee0 a2=50 a3=7ffe43859fc0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.822000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe43859e20 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe43859e50 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe43859d60 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe43859e70 a2=28 a3=0 items=0 ppid=5569 pid=5713 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe43859e50 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe43859e40 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe43859e70 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe43859e50 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe43859e70 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe43859e40 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe43859eb0 a2=28 a3=0 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe43859c60 a2=50 a3=1 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit: BPF prog-id=33 op=LOAD May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe43859c60 a2=94 a3=5 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit: BPF prog-id=33 op=UNLOAD May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe43859d10 a2=50 a3=1 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe43859e30 a2=4 a3=38 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 
01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { confidentiality } for pid=5713 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe43859e80 a2=94 a3=6 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { confidentiality } for pid=5713 
comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe43859630 a2=94 a3=83 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { perfmon } for pid=5713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { bpf } for pid=5713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.830000 audit[5713]: AVC avc: denied { confidentiality } for pid=5713 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:36.830000 audit[5713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe43859630 a2=94 a3=83 items=0 ppid=5569 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.830000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 01:11:36.874000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.874000 audit: BPF prog-id=34 op=LOAD May 15 01:11:36.874000 audit[5717]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd6a460c0 a2=98 a3=1999999999999999 items=0 ppid=5569 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.874000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 01:11:36.875000 audit: BPF prog-id=34 op=UNLOAD May 15 01:11:36.875000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit: BPF prog-id=35 op=LOAD May 15 01:11:36.875000 audit[5717]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd6a45fa0 a2=74 a3=ffff items=0 ppid=5569 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.875000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 01:11:36.875000 audit: BPF prog-id=35 op=UNLOAD May 15 01:11:36.875000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { perfmon } for pid=5717 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit[5717]: AVC avc: denied { bpf } for pid=5717 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.875000 audit: BPF prog-id=36 op=LOAD May 15 01:11:36.875000 audit[5717]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd6a45fe0 a2=40 a3=7ffcd6a461c0 items=0 ppid=5569 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.875000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 01:11:36.875000 audit: BPF prog-id=36 op=UNLOAD May 15 01:11:36.943000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.943000 audit: BPF prog-id=37 op=LOAD May 15 01:11:36.943000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff123236f0 a2=98 a3=ffffffff items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.943000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.944000 audit: BPF prog-id=37 op=UNLOAD May 15 01:11:36.945000 audit[5741]: AVC avc: denied { 
bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit: BPF prog-id=38 op=LOAD May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff12323500 a2=74 a3=540051 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit: BPF prog-id=38 op=UNLOAD May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit: BPF prog-id=39 op=LOAD May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff12323530 a2=94 a3=2 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit: BPF prog-id=39 op=UNLOAD May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff12323400 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12323430 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 
success=no exit=-22 a0=12 a1=7fff12323340 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff12323450 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff12323430 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff12323420 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff12323450 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12323430 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12323450 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff12323420 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff12323490 a2=28 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit: BPF prog-id=40 op=LOAD May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff12323300 a2=40 a3=0 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit: BPF prog-id=40 op=UNLOAD May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7fff123232f0 a2=50 a3=2800 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 
01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7fff123232f0 a2=50 a3=2800 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit: BPF prog-id=41 op=LOAD May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff12322b10 a2=94 a3=2 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.945000 audit: BPF prog-id=41 op=UNLOAD May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { perfmon } for pid=5741 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit[5741]: AVC avc: denied { bpf } for pid=5741 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.945000 audit: BPF prog-id=42 op=LOAD May 15 01:11:36.945000 audit[5741]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff12322c10 a2=94 a3=30 items=0 ppid=5569 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.945000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 
01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit: BPF prog-id=43 op=LOAD May 15 01:11:36.949000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce63f67f0 a2=98 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.949000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:36.949000 audit: BPF prog-id=43 op=UNLOAD May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit: BPF prog-id=44 op=LOAD May 15 01:11:36.949000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce63f65d0 a2=74 a3=540051 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.949000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:36.949000 audit: BPF prog-id=44 op=UNLOAD May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:36.949000 audit: BPF prog-id=45 op=LOAD May 15 01:11:36.949000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce63f6600 a2=94 a3=2 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:36.949000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:36.949000 audit: BPF prog-id=45 op=UNLOAD May 15 01:11:37.028000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit: BPF prog-id=46 op=LOAD May 15 01:11:37.028000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce63f64c0 a2=40 a3=1 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.028000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.028000 audit: BPF prog-id=46 op=UNLOAD May 15 01:11:37.028000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.028000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffce63f6590 a2=50 a3=7ffce63f6670 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.028000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 
a1=7ffce63f64d0 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce63f6500 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce63f6410 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffce63f6520 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffce63f6500 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 
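The audit records above are dense but mechanically decodable: on x86_64, syscall=321 is bpf(2); in the capability2 AVC denials, capability=39 is CAP_BPF and capability=38 is CAP_PERFMON; and the hex-encoded PROCTITLE values are NUL-separated argv buffers. As a minimal sketch (assuming a host with Python 3; the helper name is illustrative, not part of any tool shown in this log), the repeated proctitle value decodes to the bpftool invocation being audited, run against the pinned Calico XDP prefilter program:

    # Hypothetical helper: audit PROCTITLE values in these records are
    # hex-encoded argv buffers whose arguments are separated by NUL bytes.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # Proctitle copied verbatim from the PROCTITLE records above.
    print(decode_proctitle("627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"))
    # -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A

The same decoding applies to any other hex PROCTITLE field in this log, such as the iptables-nft-restore records further below.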
May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffce63f64f0 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffce63f6520 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce63f6500 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce63f6520 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffce63f64f0 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffce63f6560 a2=28 a3=0 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffce63f6310 a2=50 a3=1 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit: BPF prog-id=47 op=LOAD May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce63f6310 a2=94 a3=5 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit: BPF prog-id=47 op=UNLOAD May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffce63f63c0 a2=50 a3=1 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffce63f64e0 a2=4 a3=38 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 
audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.038000 audit[5743]: AVC avc: denied { confidentiality } for pid=5743 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:37.038000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffce63f6530 a2=94 a3=6 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.038000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { confidentiality } for pid=5743 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:37.039000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffce63f5ce0 a2=94 a3=83 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.039000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { perfmon } for pid=5743 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { confidentiality } for pid=5743 comm="bpftool" lockdown_reason="use of bpf to 
read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 01:11:37.039000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffce63f5ce0 a2=94 a3=83 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.039000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffce63f7720 a2=10 a3=f1f00800 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.039000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffce63f75c0 a2=10 a3=3 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.039000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffce63f7560 a2=10 a3=3 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.039000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.039000 audit[5743]: AVC avc: denied { bpf } for pid=5743 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 01:11:37.039000 audit[5743]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffce63f7560 a2=10 a3=7 items=0 ppid=5569 pid=5743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 
01:11:37.039000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 01:11:37.048000 audit: BPF prog-id=42 op=UNLOAD May 15 01:11:37.078136 kubelet[2379]: I0515 01:11:37.078116 2379 scope.go:117] "RemoveContainer" containerID="4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5" May 15 01:11:37.085582 env[1378]: time="2025-05-15T01:11:37.085439344Z" level=info msg="RemoveContainer for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\"" May 15 01:11:37.103939 env[1378]: time="2025-05-15T01:11:37.103917900Z" level=info msg="RemoveContainer for \"4a00d3c0f50b7661bf645aaae000013e4a0995903ee320f9403ef65f8c746ea5\" returns successfully" May 15 01:11:37.105370 env[1378]: time="2025-05-15T01:11:37.105357520Z" level=info msg="StopPodSandbox for \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\"" May 15 01:11:37.154000 audit[5791]: NETFILTER_CFG table=filter:132 family=2 entries=76 op=nft_register_chain pid=5791 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:37.154000 audit[5791]: SYSCALL arch=c000003e syscall=46 success=yes exit=26232 a0=3 a1=7ffecca09450 a2=0 a3=7ffecca0943c items=0 ppid=5569 pid=5791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.154000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:37.155000 audit[5791]: NETFILTER_CFG table=filter:133 family=2 entries=2 op=nft_unregister_chain pid=5791 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:11:37.155000 audit[5791]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffecca09450 a2=0 a3=55f5b7975000 items=0 ppid=5569 pid=5791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:37.155000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:11:37.762163 systemd-networkd[1141]: cali64ead7680a7: Gained IPv6LL May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:37.504 [WARNING][5801] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b6acc2c-032c-40b8-85fc-0967a7269b9b", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805", Pod:"coredns-7db6d8ff4d-frhnb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57be6c0c532", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:37.506 [INFO][5801] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:37.506 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" iface="eth0" netns="" May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:37.506 [INFO][5801] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:37.506 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:38.078 [INFO][5828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:38.084 [INFO][5828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:38.085 [INFO][5828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:38.102 [WARNING][5828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:38.102 [INFO][5828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:38.105 [INFO][5828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.107881 env[1378]: 2025-05-15 01:11:38.106 [INFO][5801] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:38.114661 env[1378]: time="2025-05-15T01:11:38.108204447Z" level=info msg="TearDown network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\" successfully" May 15 01:11:38.114661 env[1378]: time="2025-05-15T01:11:38.108225539Z" level=info msg="StopPodSandbox for \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\" returns successfully" May 15 01:11:38.145417 env[1378]: time="2025-05-15T01:11:38.145389850Z" level=info msg="RemovePodSandbox for \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\"" May 15 01:11:38.145530 env[1378]: time="2025-05-15T01:11:38.145422818Z" level=info msg="Forcibly stopping sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\"" May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.222 [WARNING][5847] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3b6acc2c-032c-40b8-85fc-0967a7269b9b", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da111d04a82008c05d182dad3327de96a4318cc0593145494709a8988dd37805", Pod:"coredns-7db6d8ff4d-frhnb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57be6c0c532", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.223 [INFO][5847] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.224 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" iface="eth0" netns="" May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.224 [INFO][5847] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.224 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.282 [INFO][5854] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.282 [INFO][5854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.282 [INFO][5854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.285 [WARNING][5854] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.285 [INFO][5854] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" HandleID="k8s-pod-network.cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" Workload="localhost-k8s-coredns--7db6d8ff4d--frhnb-eth0" May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.286 [INFO][5854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.289495 env[1378]: 2025-05-15 01:11:38.288 [INFO][5847] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2" May 15 01:11:38.292729 env[1378]: time="2025-05-15T01:11:38.289738527Z" level=info msg="TearDown network for sandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\" successfully" May 15 01:11:38.293313 env[1378]: time="2025-05-15T01:11:38.293300535Z" level=info msg="RemovePodSandbox \"cbd85d684d51b6749362dddf82454be28844233edcd09d2e28689ddbc70f04c2\" returns successfully" May 15 01:11:38.298136 env[1378]: time="2025-05-15T01:11:38.298123936Z" level=info msg="StopPodSandbox for \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\"" May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.332 [WARNING][5875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e99f5774-1995-4a8c-89ab-22334a39be19", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75", Pod:"coredns-7db6d8ff4d-5wxnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67f0c615d95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 
15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.332 [INFO][5875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.332 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" iface="eth0" netns="" May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.332 [INFO][5875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.332 [INFO][5875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.350 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.350 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.350 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.354 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.354 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.355 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.359173 env[1378]: 2025-05-15 01:11:38.357 [INFO][5875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:38.362074 env[1378]: time="2025-05-15T01:11:38.359307566Z" level=info msg="TearDown network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\" successfully" May 15 01:11:38.362074 env[1378]: time="2025-05-15T01:11:38.359327602Z" level=info msg="StopPodSandbox for \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\" returns successfully" May 15 01:11:38.362074 env[1378]: time="2025-05-15T01:11:38.361443250Z" level=info msg="RemovePodSandbox for \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\"" May 15 01:11:38.362074 env[1378]: time="2025-05-15T01:11:38.361462637Z" level=info msg="Forcibly stopping sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\"" May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.398 [WARNING][5903] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e99f5774-1995-4a8c-89ab-22334a39be19", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efeed05b15db490e729b5523757d0e2b34198e4f1cf59c307203fbe01c67fa75", Pod:"coredns-7db6d8ff4d-5wxnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67f0c615d95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.399 [INFO][5903] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.399 [INFO][5903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" iface="eth0" netns="" May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.399 [INFO][5903] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.399 [INFO][5903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.431 [INFO][5910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.431 [INFO][5910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.431 [INFO][5910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.435 [WARNING][5910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.435 [INFO][5910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" HandleID="k8s-pod-network.aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" Workload="localhost-k8s-coredns--7db6d8ff4d--5wxnk-eth0" May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.435 [INFO][5910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.438225 env[1378]: 2025-05-15 01:11:38.437 [INFO][5903] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63" May 15 01:11:38.454909 env[1378]: time="2025-05-15T01:11:38.438358197Z" level=info msg="TearDown network for sandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\" successfully" May 15 01:11:38.454909 env[1378]: time="2025-05-15T01:11:38.453533991Z" level=info msg="RemovePodSandbox \"aba699e091b3fbc0a51befd10ea03392ac5c005586eb80fc0c2c64c16cf6bf63\" returns successfully" May 15 01:11:38.454909 env[1378]: time="2025-05-15T01:11:38.453835636Z" level=info msg="StopPodSandbox for \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\"" May 15 01:11:38.454909 env[1378]: time="2025-05-15T01:11:38.453892530Z" level=info msg="TearDown network for sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" successfully" May 15 01:11:38.454909 env[1378]: time="2025-05-15T01:11:38.453914812Z" level=info msg="StopPodSandbox for \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" returns successfully" May 15 01:11:38.454909 env[1378]: time="2025-05-15T01:11:38.454071724Z" level=info msg="RemovePodSandbox for \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\"" May 15 01:11:38.454909 env[1378]: time="2025-05-15T01:11:38.454129899Z" level=info msg="Forcibly stopping sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\"" May 15 01:11:38.454909 env[1378]: time="2025-05-15T01:11:38.454172217Z" level=info msg="TearDown network for sandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" successfully" May 15 01:11:38.456607 env[1378]: time="2025-05-15T01:11:38.456593832Z" level=info msg="RemovePodSandbox \"576f3f72087b6ae522a58769ee25468e422c62bd353cea71e52ca329b280f4f7\" returns successfully" May 15 01:11:38.456802 env[1378]: time="2025-05-15T01:11:38.456785868Z" level=info msg="StopPodSandbox for \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\"" May 15 01:11:38.456843 env[1378]: time="2025-05-15T01:11:38.456824688Z" level=info msg="TearDown network for sandbox \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\" successfully" May 15 01:11:38.456870 env[1378]: time="2025-05-15T01:11:38.456842396Z" level=info msg="StopPodSandbox for \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\" returns successfully" May 15 01:11:38.457006 env[1378]: time="2025-05-15T01:11:38.456994692Z" level=info msg="RemovePodSandbox for \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\"" May 15 01:11:38.457208 env[1378]: time="2025-05-15T01:11:38.457093486Z" level=info msg="Forcibly stopping sandbox 
\"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\"" May 15 01:11:38.457208 env[1378]: time="2025-05-15T01:11:38.457131303Z" level=info msg="TearDown network for sandbox \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\" successfully" May 15 01:11:38.459207 env[1378]: time="2025-05-15T01:11:38.459195257Z" level=info msg="RemovePodSandbox \"ee5a4f2957b9c969ad30eb55791d80e8e65a89c8a18a5713e362eb5424fbfa5f\" returns successfully" May 15 01:11:38.459397 env[1378]: time="2025-05-15T01:11:38.459381349Z" level=info msg="StopPodSandbox for \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\"" May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.494 [WARNING][5928] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0", GenerateName:"calico-apiserver-68cd77bbfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab75af42-a771-4175-8a6f-81471c06a1c4", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd77bbfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee", Pod:"calico-apiserver-68cd77bbfb-stq89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie82da44cbca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.494 [INFO][5928] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.494 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" iface="eth0" netns="" May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.494 [INFO][5928] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.494 [INFO][5928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.510 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.510 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.510 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.514 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.514 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.514 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.517930 env[1378]: 2025-05-15 01:11:38.516 [INFO][5928] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:38.519498 env[1378]: time="2025-05-15T01:11:38.517949055Z" level=info msg="TearDown network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\" successfully" May 15 01:11:38.519498 env[1378]: time="2025-05-15T01:11:38.517970306Z" level=info msg="StopPodSandbox for \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\" returns successfully" May 15 01:11:38.519498 env[1378]: time="2025-05-15T01:11:38.519342159Z" level=info msg="RemovePodSandbox for \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\"" May 15 01:11:38.519498 env[1378]: time="2025-05-15T01:11:38.519361102Z" level=info msg="Forcibly stopping sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\"" May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.551 [WARNING][5955] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0", GenerateName:"calico-apiserver-68cd77bbfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab75af42-a771-4175-8a6f-81471c06a1c4", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd77bbfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee", Pod:"calico-apiserver-68cd77bbfb-stq89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie82da44cbca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.551 [INFO][5955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.551 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" iface="eth0" netns="" May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.551 [INFO][5955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.551 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.564 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.564 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.564 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.569 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.569 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" HandleID="k8s-pod-network.67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.572 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.576037 env[1378]: 2025-05-15 01:11:38.574 [INFO][5955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb" May 15 01:11:38.576498 env[1378]: time="2025-05-15T01:11:38.576061251Z" level=info msg="TearDown network for sandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\" successfully" May 15 01:11:38.581278 env[1378]: time="2025-05-15T01:11:38.578935571Z" level=info msg="RemovePodSandbox \"67f268f49ffff0fb4b1a24e49dc93f0c5b2b616e7ecfc2fcfd518117436170bb\" returns successfully" May 15 01:11:38.582114 env[1378]: time="2025-05-15T01:11:38.582098207Z" level=info msg="StopPodSandbox for \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\"" May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.605 [WARNING][5983] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0", GenerateName:"calico-apiserver-68cd77bbfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a465e0d0-b1f5-4d68-a5ea-fce28821f59f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd77bbfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0", Pod:"calico-apiserver-68cd77bbfb-srwdw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a2bddc7410", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.605 [INFO][5983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.605 
[INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" iface="eth0" netns="" May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.605 [INFO][5983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.605 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.626 [INFO][5991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.626 [INFO][5991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.626 [INFO][5991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.632 [WARNING][5991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.632 [INFO][5991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.632 [INFO][5991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.635669 env[1378]: 2025-05-15 01:11:38.634 [INFO][5983] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:38.636225 env[1378]: time="2025-05-15T01:11:38.636191007Z" level=info msg="TearDown network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\" successfully" May 15 01:11:38.636311 env[1378]: time="2025-05-15T01:11:38.636299397Z" level=info msg="StopPodSandbox for \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\" returns successfully" May 15 01:11:38.636819 env[1378]: time="2025-05-15T01:11:38.636791684Z" level=info msg="RemovePodSandbox for \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\"" May 15 01:11:38.638181 env[1378]: time="2025-05-15T01:11:38.636906244Z" level=info msg="Forcibly stopping sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\"" May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.658 [WARNING][6009] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0", GenerateName:"calico-apiserver-68cd77bbfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a465e0d0-b1f5-4d68-a5ea-fce28821f59f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cd77bbfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0", Pod:"calico-apiserver-68cd77bbfb-srwdw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a2bddc7410", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.658 [INFO][6009] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.659 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" iface="eth0" netns="" May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.659 [INFO][6009] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.659 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.686 [INFO][6017] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.686 [INFO][6017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.686 [INFO][6017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.691 [WARNING][6017] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.691 [INFO][6017] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" HandleID="k8s-pod-network.c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.691 [INFO][6017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.698906 env[1378]: 2025-05-15 01:11:38.693 [INFO][6009] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83" May 15 01:11:38.703610 env[1378]: time="2025-05-15T01:11:38.699349391Z" level=info msg="TearDown network for sandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\" successfully" May 15 01:11:38.707677 env[1378]: time="2025-05-15T01:11:38.707650905Z" level=info msg="RemovePodSandbox \"c4d4bbbf33ac33bf0e2b08b160045cc1ffeb27a7a5532631eaf03426577b0c83\" returns successfully" May 15 01:11:38.708208 env[1378]: time="2025-05-15T01:11:38.708148885Z" level=info msg="StopPodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\"" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.736 [WARNING][6038] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.736 [INFO][6038] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.736 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" iface="eth0" netns="" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.736 [INFO][6038] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.736 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.758 [INFO][6045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.759 [INFO][6045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.759 [INFO][6045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.763 [WARNING][6045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.763 [INFO][6045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.764 [INFO][6045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.769182 env[1378]: 2025-05-15 01:11:38.765 [INFO][6038] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:38.770050 env[1378]: time="2025-05-15T01:11:38.770030247Z" level=info msg="TearDown network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" successfully" May 15 01:11:38.770130 env[1378]: time="2025-05-15T01:11:38.770093506Z" level=info msg="StopPodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" returns successfully" May 15 01:11:38.770802 env[1378]: time="2025-05-15T01:11:38.770789310Z" level=info msg="RemovePodSandbox for \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\"" May 15 01:11:38.770905 env[1378]: time="2025-05-15T01:11:38.770882212Z" level=info msg="Forcibly stopping sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\"" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.800 [WARNING][6064] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.800 [INFO][6064] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.800 [INFO][6064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" iface="eth0" netns="" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.800 [INFO][6064] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.800 [INFO][6064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.817 [INFO][6071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.817 [INFO][6071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.817 [INFO][6071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.829 [WARNING][6071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.829 [INFO][6071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" HandleID="k8s-pod-network.70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.831 [INFO][6071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.835694 env[1378]: 2025-05-15 01:11:38.832 [INFO][6064] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a" May 15 01:11:38.836996 env[1378]: time="2025-05-15T01:11:38.835711399Z" level=info msg="TearDown network for sandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" successfully" May 15 01:11:38.838680 env[1378]: time="2025-05-15T01:11:38.838644090Z" level=info msg="RemovePodSandbox \"70c74e4e30c614401701044c55217347ab9dccdf0e6d3c14ac7208ee55945d7a\" returns successfully" May 15 01:11:38.838920 env[1378]: time="2025-05-15T01:11:38.838907296Z" level=info msg="StopPodSandbox for \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\"" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.878 [WARNING][6090] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.878 [INFO][6090] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.878 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" iface="eth0" netns="" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.878 [INFO][6090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.878 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.895 [INFO][6097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.895 [INFO][6097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.895 [INFO][6097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.899 [WARNING][6097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.899 [INFO][6097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.900 [INFO][6097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.902772 env[1378]: 2025-05-15 01:11:38.901 [INFO][6090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:38.903306 env[1378]: time="2025-05-15T01:11:38.903283013Z" level=info msg="TearDown network for sandbox \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\" successfully" May 15 01:11:38.903387 env[1378]: time="2025-05-15T01:11:38.903371906Z" level=info msg="StopPodSandbox for \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\" returns successfully" May 15 01:11:38.904143 env[1378]: time="2025-05-15T01:11:38.904110684Z" level=info msg="RemovePodSandbox for \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\"" May 15 01:11:38.904992 env[1378]: time="2025-05-15T01:11:38.904148152Z" level=info msg="Forcibly stopping sandbox \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\"" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.927 [WARNING][6115] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.927 [INFO][6115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.927 [INFO][6115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" iface="eth0" netns="" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.927 [INFO][6115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.927 [INFO][6115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.949 [INFO][6122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.949 [INFO][6122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.949 [INFO][6122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.954 [WARNING][6122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.954 [INFO][6122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" HandleID="k8s-pod-network.dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" Workload="localhost-k8s-calico--kube--controllers--77f75b95c--2fjvp-eth0" May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.955 [INFO][6122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:38.957361 env[1378]: 2025-05-15 01:11:38.956 [INFO][6115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba" May 15 01:11:38.957732 env[1378]: time="2025-05-15T01:11:38.957711580Z" level=info msg="TearDown network for sandbox \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\" successfully" May 15 01:11:39.005498 env[1378]: time="2025-05-15T01:11:39.005463121Z" level=info msg="RemovePodSandbox \"dfbb01742a77fce2783a055bc57719ed25021c23267f801623d8cd9ae46f5dba\" returns successfully" May 15 01:11:39.005935 env[1378]: time="2025-05-15T01:11:39.005922394Z" level=info msg="StopPodSandbox for \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\"" May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.058 [WARNING][6140] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0", GenerateName:"calico-apiserver-554b8879-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ec7ac22-58ee-4832-ade2-3b509e93036d", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b8879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f", Pod:"calico-apiserver-554b8879-bsbbc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9c6b6a0d48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.058 [INFO][6140] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.058 [INFO][6140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" iface="eth0" netns="" May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.058 [INFO][6140] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.058 [INFO][6140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.077 [INFO][6147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.077 [INFO][6147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.077 [INFO][6147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.080 [WARNING][6147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.080 [INFO][6147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.081 [INFO][6147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:39.083465 env[1378]: 2025-05-15 01:11:39.082 [INFO][6140] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:39.084978 env[1378]: time="2025-05-15T01:11:39.083821077Z" level=info msg="TearDown network for sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\" successfully" May 15 01:11:39.084978 env[1378]: time="2025-05-15T01:11:39.083843242Z" level=info msg="StopPodSandbox for \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\" returns successfully" May 15 01:11:39.084978 env[1378]: time="2025-05-15T01:11:39.084162677Z" level=info msg="RemovePodSandbox for \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\"" May 15 01:11:39.084978 env[1378]: time="2025-05-15T01:11:39.084179572Z" level=info msg="Forcibly stopping sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\"" May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.105 [WARNING][6166] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0", GenerateName:"calico-apiserver-554b8879-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ec7ac22-58ee-4832-ade2-3b509e93036d", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b8879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5943913ecef59acefb37df01d999c2b6869355e709e49629d435b89f889ab1f", Pod:"calico-apiserver-554b8879-bsbbc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9c6b6a0d48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.105 [INFO][6166] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.105 [INFO][6166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" iface="eth0" netns="" May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.105 [INFO][6166] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.105 [INFO][6166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.132 [INFO][6173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.132 [INFO][6173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.132 [INFO][6173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.135 [WARNING][6173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.135 [INFO][6173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" HandleID="k8s-pod-network.dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" Workload="localhost-k8s-calico--apiserver--554b8879--bsbbc-eth0" May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.136 [INFO][6173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:39.138347 env[1378]: 2025-05-15 01:11:39.137 [INFO][6166] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28" May 15 01:11:39.139691 env[1378]: time="2025-05-15T01:11:39.138833313Z" level=info msg="TearDown network for sandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\" successfully" May 15 01:11:39.140964 env[1378]: time="2025-05-15T01:11:39.140951231Z" level=info msg="RemovePodSandbox \"dfb1b4e8e59d8ce2474a15b1b926acafe78f83dfb1dbd77dfb8e2ce967334c28\" returns successfully" May 15 01:11:39.141326 env[1378]: time="2025-05-15T01:11:39.141296613Z" level=info msg="StopPodSandbox for \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\"" May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.172 [WARNING][6191] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lxhjw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268", Pod:"csi-node-driver-lxhjw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2df672251aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.172 [INFO][6191] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.172 [INFO][6191] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" iface="eth0" netns="" May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.172 [INFO][6191] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.172 [INFO][6191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.189 [INFO][6198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.190 [INFO][6198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.190 [INFO][6198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.196 [WARNING][6198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.196 [INFO][6198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.197 [INFO][6198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:39.200521 env[1378]: 2025-05-15 01:11:39.199 [INFO][6191] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:39.204634 env[1378]: time="2025-05-15T01:11:39.200933577Z" level=info msg="TearDown network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\" successfully" May 15 01:11:39.204634 env[1378]: time="2025-05-15T01:11:39.200955007Z" level=info msg="StopPodSandbox for \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\" returns successfully" May 15 01:11:39.204634 env[1378]: time="2025-05-15T01:11:39.201279640Z" level=info msg="RemovePodSandbox for \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\"" May 15 01:11:39.204634 env[1378]: time="2025-05-15T01:11:39.201295905Z" level=info msg="Forcibly stopping sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\"" May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.225 [WARNING][6216] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lxhjw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d6d22d8-b16d-4534-aecc-a2e4cc8f60f5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8533be6fd8db471f0ba71d405337161e3bb4687a2cd141672f27f941f70e268", Pod:"csi-node-driver-lxhjw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2df672251aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.225 [INFO][6216] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.225 [INFO][6216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" iface="eth0" netns="" May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.225 [INFO][6216] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.225 [INFO][6216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.251 [INFO][6223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.251 [INFO][6223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.251 [INFO][6223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.258 [WARNING][6223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.258 [INFO][6223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" HandleID="k8s-pod-network.d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" Workload="localhost-k8s-csi--node--driver--lxhjw-eth0" May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.260 [INFO][6223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:11:39.268432 env[1378]: 2025-05-15 01:11:39.263 [INFO][6216] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179" May 15 01:11:39.271422 env[1378]: time="2025-05-15T01:11:39.268561062Z" level=info msg="TearDown network for sandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\" successfully" May 15 01:11:39.271668 env[1378]: time="2025-05-15T01:11:39.271652598Z" level=info msg="RemovePodSandbox \"d8307da51908c4e6d76b03a0d473a3311498004517b54b256214d82a455a2179\" returns successfully" May 15 01:11:53.464781 systemd[1]: Started sshd@9-139.178.70.104:22-194.0.234.16:17282.service. May 15 01:11:53.482301 kernel: kauditd_printk_skb: 499 callbacks suppressed May 15 01:11:53.487758 kernel: audit: type=1130 audit(1747271513.466:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.104:22-194.0.234.16:17282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:53.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.104:22-194.0.234.16:17282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:55.317298 sshd[6252]: Connection closed by authenticating user operator 194.0.234.16 port 17282 [preauth] May 15 01:11:55.338146 kernel: audit: type=1109 audit(1747271515.316:530): pid=6252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/sbin/sshd" hostname=194.0.234.16 addr=194.0.234.16 terminal=ssh res=failed' May 15 01:11:55.346104 kernel: audit: type=1131 audit(1747271515.325:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.104:22-194.0.234.16:17282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:55.316000 audit[6252]: USER_ERR pid=6252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/sbin/sshd" hostname=194.0.234.16 addr=194.0.234.16 terminal=ssh res=failed' May 15 01:11:55.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.104:22-194.0.234.16:17282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:55.326137 systemd[1]: sshd@9-139.178.70.104:22-194.0.234.16:17282.service: Deactivated successfully. 
May 15 01:11:57.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.104:22-186.56.11.17:49356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:57.249101 systemd[1]: Started sshd@10-139.178.70.104:22-186.56.11.17:49356.service. May 15 01:11:57.267680 kernel: audit: type=1130 audit(1747271517.247:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.104:22-186.56.11.17:49356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:58.062852 kubelet[2379]: I0515 01:11:58.057820 2379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 01:11:58.082284 kernel: audit: type=1130 audit(1747271518.062:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.104:22-147.75.109.163:50274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:58.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.104:22-147.75.109.163:50274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:11:58.063110 systemd[1]: Started sshd@11-139.178.70.104:22-147.75.109.163:50274.service. May 15 01:11:58.299060 kubelet[2379]: I0515 01:11:58.290770 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dbd7dff85-pc8d6" podStartSLOduration=25.286483252 podStartE2EDuration="25.286483252s" podCreationTimestamp="2025-05-15 01:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:11:36.510842929 +0000 UTC m=+59.982726025" watchObservedRunningTime="2025-05-15 01:11:58.286483252 +0000 UTC m=+81.758366346" May 15 01:11:58.351000 audit[6266]: USER_ACCT pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:11:58.355516 kernel: audit: type=1101 audit(1747271518.351:534): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:11:58.355588 sshd[6266]: Accepted publickey for core from 147.75.109.163 port 50274 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:11:58.367012 kernel: audit: type=1103 audit(1747271518.356:535): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:11:58.369271 kernel: audit: type=1006 audit(1747271518.356:536): pid=6266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 15 01:11:58.375804 kernel: audit: type=1300 audit(1747271518.356:536): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7ff7f300 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:58.375837 kernel: audit: type=1327 audit(1747271518.356:536): proctitle=737368643A20636F7265205B707269765D May 15 01:11:58.356000 audit[6266]: CRED_ACQ pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:11:58.356000 audit[6266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7ff7f300 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:58.356000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:11:58.377334 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:11:58.475525 systemd[1]: Started session-10.scope. May 15 01:11:58.476158 systemd-logind[1361]: New session 10 of user core. May 15 01:11:58.497165 kernel: audit: type=1105 audit(1747271518.483:537): pid=6266 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:11:58.498556 kernel: audit: type=1103 audit(1747271518.491:538): pid=6272 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:11:58.483000 audit[6266]: USER_START pid=6266 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:11:58.491000 audit[6272]: CRED_ACQ pid=6272 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:11:58.511269 kubelet[2379]: I0515 01:11:58.511210 2379 topology_manager.go:215] "Topology Admit Handler" podUID="d8a192bf-827c-4f9a-8033-eb76e1b29b9f" podNamespace="calico-apiserver" podName="calico-apiserver-554b8879-bdlb2" May 15 01:11:58.677000 audit[6278]: NETFILTER_CFG table=filter:134 family=2 entries=8 op=nft_register_rule pid=6278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:58.691645 kernel: audit: type=1325 audit(1747271518.677:539): table=filter:134 family=2 entries=8 op=nft_register_rule pid=6278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:58.691695 kernel: audit: type=1300 audit(1747271518.677:539): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe2b1a1d70 a2=0 a3=7ffe2b1a1d5c items=0 ppid=2516 pid=6278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:58.691718 kernel: audit: type=1327 audit(1747271518.677:539): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:58.691732 kernel: audit: type=1325 audit(1747271518.687:540): table=nat:135 family=2 entries=38 op=nft_register_chain pid=6278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:58.691750 kernel: audit: type=1300 audit(1747271518.687:540): arch=c000003e syscall=46 success=yes exit=11364 a0=3 a1=7ffe2b1a1d70 a2=0 a3=7ffe2b1a1d5c items=0 ppid=2516 pid=6278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:58.677000 audit[6278]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe2b1a1d70 a2=0 a3=7ffe2b1a1d5c items=0 ppid=2516 pid=6278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:58.696975 kernel: audit: type=1327 audit(1747271518.687:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:58.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:58.687000 audit[6278]: NETFILTER_CFG table=nat:135 family=2 entries=38 op=nft_register_chain pid=6278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:58.701733 kernel: audit: type=1100 audit(1747271518.696:541): pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=186.56.11.17 addr=186.56.11.17 terminal=ssh res=failed' May 15 01:11:58.687000 audit[6278]: SYSCALL arch=c000003e syscall=46 success=yes exit=11364 a0=3 a1=7ffe2b1a1d70 a2=0 a3=7ffe2b1a1d5c items=0 ppid=2516 pid=6278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:58.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:58.696000 audit[6259]: USER_AUTH pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=186.56.11.17 addr=186.56.11.17 terminal=ssh res=failed' May 15 01:11:58.697215 sshd[6259]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=186.56.11.17 user=root May 15 01:11:58.719000 audit[6280]: NETFILTER_CFG table=filter:136 family=2 entries=8 op=nft_register_rule pid=6280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:58.719000 audit[6280]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff0b323a00 a2=0 a3=7fff0b3239ec items=0 ppid=2516 pid=6280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:58.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:58.724248 kernel: audit: type=1325 audit(1747271518.719:542): table=filter:136 family=2 entries=8 op=nft_register_rule pid=6280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:58.729000 audit[6280]: NETFILTER_CFG table=nat:137 family=2 entries=22 op=nft_register_rule pid=6280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:11:58.729000 audit[6280]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff0b323a00 a2=0 a3=7fff0b3239ec items=0 ppid=2516 pid=6280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:11:58.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:11:58.940517 env[1378]: time="2025-05-15T01:11:58.939681475Z" level=info msg="StopContainer for \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\" with timeout 30 (s)" May 15 01:11:58.943650 env[1378]: time="2025-05-15T01:11:58.942943399Z" level=info msg="Stop container \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\" with signal terminated" May 15 01:11:58.986550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2-rootfs.mount: Deactivated successfully. 
May 15 01:11:59.006035 env[1378]: time="2025-05-15T01:11:59.005979696Z" level=info msg="shim disconnected" id=7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2 May 15 01:11:59.006232 env[1378]: time="2025-05-15T01:11:59.006165068Z" level=warning msg="cleaning up after shim disconnected" id=7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2 namespace=k8s.io May 15 01:11:59.006307 env[1378]: time="2025-05-15T01:11:59.006297607Z" level=info msg="cleaning up dead shim" May 15 01:11:59.013084 env[1378]: time="2025-05-15T01:11:59.013059241Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6303 runtime=io.containerd.runc.v2\n" May 15 01:11:59.029927 env[1378]: time="2025-05-15T01:11:59.029811064Z" level=info msg="StopContainer for \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\" returns successfully" May 15 01:11:59.279303 env[1378]: time="2025-05-15T01:11:59.276088311Z" level=info msg="StopPodSandbox for \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\"" May 15 01:11:59.279303 env[1378]: time="2025-05-15T01:11:59.276143072Z" level=info msg="Container to stop \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:11:59.278843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0-shm.mount: Deactivated successfully. May 15 01:11:59.298975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0-rootfs.mount: Deactivated successfully. May 15 01:11:59.300383 env[1378]: time="2025-05-15T01:11:59.300360815Z" level=info msg="shim disconnected" id=7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0 May 15 01:11:59.300469 env[1378]: time="2025-05-15T01:11:59.300457199Z" level=warning msg="cleaning up after shim disconnected" id=7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0 namespace=k8s.io May 15 01:11:59.300547 env[1378]: time="2025-05-15T01:11:59.300537307Z" level=info msg="cleaning up dead shim" May 15 01:11:59.306303 env[1378]: time="2025-05-15T01:11:59.306285868Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:11:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6336 runtime=io.containerd.runc.v2\n" May 15 01:11:59.516151 kubelet[2379]: I0515 01:11:59.516126 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8a192bf-827c-4f9a-8033-eb76e1b29b9f-calico-apiserver-certs\") pod \"calico-apiserver-554b8879-bdlb2\" (UID: \"d8a192bf-827c-4f9a-8033-eb76e1b29b9f\") " pod="calico-apiserver/calico-apiserver-554b8879-bdlb2" May 15 01:11:59.516634 kubelet[2379]: I0515 01:11:59.516622 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghz4f\" (UniqueName: \"kubernetes.io/projected/d8a192bf-827c-4f9a-8033-eb76e1b29b9f-kube-api-access-ghz4f\") pod \"calico-apiserver-554b8879-bdlb2\" (UID: \"d8a192bf-827c-4f9a-8033-eb76e1b29b9f\") " pod="calico-apiserver/calico-apiserver-554b8879-bdlb2" May 15 01:11:59.564871 systemd[1]: run-containerd-runc-k8s.io-7e054a5c01bf65fcf4d801a27fe363fd883c70324936fa186462eea05da3b1fc-runc.go1got.mount: Deactivated successfully. 
May 15 01:12:00.101465 systemd-networkd[1141]: cali3a2bddc7410: Link DOWN May 15 01:12:00.101469 systemd-networkd[1141]: cali3a2bddc7410: Lost carrier May 15 01:12:00.208880 env[1378]: time="2025-05-15T01:12:00.208848315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b8879-bdlb2,Uid:d8a192bf-827c-4f9a-8033-eb76e1b29b9f,Namespace:calico-apiserver,Attempt:0,}" May 15 01:12:00.240000 audit[6402]: NETFILTER_CFG table=filter:138 family=2 entries=48 op=nft_register_rule pid=6402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:12:00.240000 audit[6402]: SYSCALL arch=c000003e syscall=46 success=yes exit=7756 a0=3 a1=7ffee8e77530 a2=0 a3=7ffee8e7751c items=0 ppid=5569 pid=6402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:00.240000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:12:00.240000 audit[6402]: NETFILTER_CFG table=filter:139 family=2 entries=2 op=nft_unregister_chain pid=6402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:12:00.240000 audit[6402]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffee8e77530 a2=0 a3=555ef43ae000 items=0 ppid=5569 pid=6402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:00.240000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:12:01.174933 sshd[6259]: Failed password for root from 186.56.11.17 port 49356 ssh2 May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:00.038 [INFO][6363] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:00.055 [INFO][6363] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" iface="eth0" netns="/var/run/netns/cni-517601d4-1f58-7c50-feb4-aca7b131d97c" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:00.057 [INFO][6363] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" iface="eth0" netns="/var/run/netns/cni-517601d4-1f58-7c50-feb4-aca7b131d97c" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:00.079 [INFO][6363] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" after=23.415202ms iface="eth0" netns="/var/run/netns/cni-517601d4-1f58-7c50-feb4-aca7b131d97c" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:00.079 [INFO][6363] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:00.079 [INFO][6363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:01.099 [INFO][6392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:01.102 [INFO][6392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:01.102 [INFO][6392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:01.209 [INFO][6392] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:01.209 [INFO][6392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:01.210 [INFO][6392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:12:01.216888 env[1378]: 2025-05-15 01:12:01.212 [INFO][6363] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:01.216323 systemd[1]: run-netns-cni\x2d517601d4\x2d1f58\x2d7c50\x2dfeb4\x2daca7b131d97c.mount: Deactivated successfully. 
May 15 01:12:01.234509 env[1378]: time="2025-05-15T01:12:01.217013731Z" level=info msg="TearDown network for sandbox \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\" successfully" May 15 01:12:01.234509 env[1378]: time="2025-05-15T01:12:01.217036732Z" level=info msg="StopPodSandbox for \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\" returns successfully" May 15 01:12:01.290383 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 01:12:01.309368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5218965fb6c: link becomes ready May 15 01:12:01.311390 systemd-networkd[1141]: cali5218965fb6c: Link UP May 15 01:12:01.311547 systemd-networkd[1141]: cali5218965fb6c: Gained carrier May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:00.334 [INFO][6403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0 calico-apiserver-554b8879- calico-apiserver d8a192bf-827c-4f9a-8033-eb76e1b29b9f 1137 0 2025-05-15 01:11:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:554b8879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-554b8879-bdlb2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5218965fb6c [] []}} ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bdlb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bdlb2-" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:00.335 [INFO][6403] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bdlb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.100 [INFO][6416] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" HandleID="k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Workload="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.164 [INFO][6416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" HandleID="k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Workload="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-554b8879-bdlb2", "timestamp":"2025-05-15 01:12:01.100085684 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.164 [INFO][6416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.210 [INFO][6416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.210 [INFO][6416] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.217 [INFO][6416] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.223 [INFO][6416] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.230 [INFO][6416] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.232 [INFO][6416] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.234 [INFO][6416] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.234 [INFO][6416] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.236 [INFO][6416] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55 May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.245 [INFO][6416] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.264 [INFO][6416] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.264 [INFO][6416] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" host="localhost" May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.264 [INFO][6416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
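
[Annotation, not part of the captured log] The IPAM records above show the Calico plugin confirming an affinity for block 192.168.88.128/26 on host "localhost" and then claiming 192.168.88.137/26 for the new calico-apiserver pod. As a quick sanity check (an illustrative sketch, not Calico code), the claimed address does fall inside that affine /26:

# Illustrative check: the claimed IP sits inside the affine block logged above.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_address("192.168.88.137")
assert claimed in block
print(f"{claimed} is one of {block.num_addresses} addresses in {block}")  # 64 addresses in a /26
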
May 15 01:12:01.319470 env[1378]: 2025-05-15 01:12:01.264 [INFO][6416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" HandleID="k8s-pod-network.2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Workload="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" May 15 01:12:01.341547 env[1378]: 2025-05-15 01:12:01.266 [INFO][6403] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bdlb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0", GenerateName:"calico-apiserver-554b8879-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8a192bf-827c-4f9a-8033-eb76e1b29b9f", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b8879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-554b8879-bdlb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5218965fb6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:12:01.341547 env[1378]: 2025-05-15 01:12:01.267 [INFO][6403] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.137/32] ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bdlb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" May 15 01:12:01.341547 env[1378]: 2025-05-15 01:12:01.267 [INFO][6403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5218965fb6c ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bdlb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" May 15 01:12:01.341547 env[1378]: 2025-05-15 01:12:01.299 [INFO][6403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bdlb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" May 15 01:12:01.341547 env[1378]: 2025-05-15 01:12:01.299 [INFO][6403] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Namespace="calico-apiserver" 
Pod="calico-apiserver-554b8879-bdlb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0", GenerateName:"calico-apiserver-554b8879-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8a192bf-827c-4f9a-8033-eb76e1b29b9f", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 1, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"554b8879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55", Pod:"calico-apiserver-554b8879-bdlb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5218965fb6c", MAC:"f2:d1:19:57:58:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 01:12:01.341547 env[1378]: 2025-05-15 01:12:01.317 [INFO][6403] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55" Namespace="calico-apiserver" Pod="calico-apiserver-554b8879-bdlb2" WorkloadEndpoint="localhost-k8s-calico--apiserver--554b8879--bdlb2-eth0" May 15 01:12:01.373000 audit[6435]: NETFILTER_CFG table=filter:140 family=2 entries=56 op=nft_register_chain pid=6435 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:12:01.373000 audit[6435]: SYSCALL arch=c000003e syscall=46 success=yes exit=27916 a0=3 a1=7ffe4c405020 a2=0 a3=7ffe4c40500c items=0 ppid=5569 pid=6435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:01.373000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:12:01.499805 env[1378]: time="2025-05-15T01:12:01.499142477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 01:12:01.499805 env[1378]: time="2025-05-15T01:12:01.499166916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 01:12:01.499805 env[1378]: time="2025-05-15T01:12:01.499173820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 01:12:01.499805 env[1378]: time="2025-05-15T01:12:01.499293807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55 pid=6447 runtime=io.containerd.runc.v2 May 15 01:12:01.540805 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 01:12:01.564196 env[1378]: time="2025-05-15T01:12:01.560952006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-554b8879-bdlb2,Uid:d8a192bf-827c-4f9a-8033-eb76e1b29b9f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55\"" May 15 01:12:01.604740 sshd[6259]: Received disconnect from 186.56.11.17 port 49356:11: Bye Bye [preauth] May 15 01:12:01.604740 sshd[6259]: Disconnected from authenticating user root 186.56.11.17 port 49356 [preauth] May 15 01:12:01.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.104:22-186.56.11.17:49356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:01.611934 systemd[1]: sshd@10-139.178.70.104:22-186.56.11.17:49356.service: Deactivated successfully. May 15 01:12:02.385420 kubelet[2379]: E0515 01:12:02.385393 2379 kubelet.go:2511] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.259s" May 15 01:12:02.390341 kubelet[2379]: I0515 01:12:02.390326 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:02.702992 systemd-networkd[1141]: cali5218965fb6c: Gained IPv6LL May 15 01:12:02.883712 env[1378]: time="2025-05-15T01:12:02.883613763Z" level=info msg="CreateContainer within sandbox \"2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 01:12:02.884217 kubelet[2379]: I0515 01:12:02.884197 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a465e0d0-b1f5-4d68-a5ea-fce28821f59f-calico-apiserver-certs\") pod \"a465e0d0-b1f5-4d68-a5ea-fce28821f59f\" (UID: \"a465e0d0-b1f5-4d68-a5ea-fce28821f59f\") " May 15 01:12:02.884307 kubelet[2379]: I0515 01:12:02.884274 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pfj2\" (UniqueName: \"kubernetes.io/projected/a465e0d0-b1f5-4d68-a5ea-fce28821f59f-kube-api-access-8pfj2\") pod \"a465e0d0-b1f5-4d68-a5ea-fce28821f59f\" (UID: \"a465e0d0-b1f5-4d68-a5ea-fce28821f59f\") " May 15 01:12:02.975689 env[1378]: time="2025-05-15T01:12:02.975485865Z" level=info msg="CreateContainer within sandbox \"2191f1afd8f226013fc9b0dc0bb7a94e13e6f6cc2277a540174aedbff8ab5d55\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e40fed72c0070913ea2f1103673f5d3e646bdba915d47cef18e1b6aac9db6cac\"" May 15 01:12:03.110023 systemd[1]: var-lib-kubelet-pods-a465e0d0\x2db1f5\x2d4d68\x2da5ea\x2dfce28821f59f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8pfj2.mount: Deactivated successfully. 
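
[Annotation, not part of the captured log] The mount and netns unit names above use systemd's unit-name escaping: path separators become '-', while literal '-' and '~' characters are encoded as \x2d and \x7e. A small sketch of the reverse mapping (illustrative only; `systemd-escape --unescape --path` should produce the same result) recovers the kubelet volume path:

# Illustrative decoder for systemd-escaped mount unit names.
import re

def unescape_unit(unit: str) -> str:
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")            # top-level '-' separators map back to '/'
    return re.sub(r"\\x([0-9a-fA-F]{2})",          # \xNN escapes hold the literal bytes
                  lambda m: chr(int(m.group(1), 16)), path)

print(unescape_unit(r"var-lib-kubelet-pods-a465e0d0\x2db1f5\x2d4d68\x2da5ea\x2dfce28821f59f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8pfj2.mount"))
# -> /var/lib/kubelet/pods/a465e0d0-b1f5-4d68-a5ea-fce28821f59f/volumes/kubernetes.io~projected/kube-api-access-8pfj2
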
May 15 01:12:03.112681 systemd[1]: var-lib-kubelet-pods-a465e0d0\x2db1f5\x2d4d68\x2da5ea\x2dfce28821f59f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 01:12:03.194986 kubelet[2379]: I0515 01:12:03.193452 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a465e0d0-b1f5-4d68-a5ea-fce28821f59f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "a465e0d0-b1f5-4d68-a5ea-fce28821f59f" (UID: "a465e0d0-b1f5-4d68-a5ea-fce28821f59f"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 01:12:03.310816 kubelet[2379]: I0515 01:12:03.183173 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a465e0d0-b1f5-4d68-a5ea-fce28821f59f-kube-api-access-8pfj2" (OuterVolumeSpecName: "kube-api-access-8pfj2") pod "a465e0d0-b1f5-4d68-a5ea-fce28821f59f" (UID: "a465e0d0-b1f5-4d68-a5ea-fce28821f59f"). InnerVolumeSpecName "kube-api-access-8pfj2". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 01:12:03.359644 kubelet[2379]: I0515 01:12:03.359621 2379 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8pfj2\" (UniqueName: \"kubernetes.io/projected/a465e0d0-b1f5-4d68-a5ea-fce28821f59f-kube-api-access-8pfj2\") on node \"localhost\" DevicePath \"\"" May 15 01:12:03.359644 kubelet[2379]: I0515 01:12:03.359646 2379 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a465e0d0-b1f5-4d68-a5ea-fce28821f59f-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 15 01:12:03.365348 env[1378]: time="2025-05-15T01:12:03.365174980Z" level=info msg="StartContainer for \"e40fed72c0070913ea2f1103673f5d3e646bdba915d47cef18e1b6aac9db6cac\"" May 15 01:12:03.435954 env[1378]: time="2025-05-15T01:12:03.435918544Z" level=info msg="StartContainer for \"e40fed72c0070913ea2f1103673f5d3e646bdba915d47cef18e1b6aac9db6cac\" returns successfully" May 15 01:12:04.048000 audit[6538]: NETFILTER_CFG table=filter:141 family=2 entries=8 op=nft_register_rule pid=6538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:04.071913 kernel: kauditd_printk_skb: 15 callbacks suppressed May 15 01:12:04.071993 kernel: audit: type=1325 audit(1747271524.048:548): table=filter:141 family=2 entries=8 op=nft_register_rule pid=6538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:04.074107 kernel: audit: type=1300 audit(1747271524.048:548): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc56875d10 a2=0 a3=7ffc56875cfc items=0 ppid=2516 pid=6538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:04.077030 kernel: audit: type=1327 audit(1747271524.048:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:04.079868 kernel: audit: type=1325 audit(1747271524.060:549): table=nat:142 family=2 entries=36 op=nft_register_rule pid=6538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:04.079896 kernel: audit: type=1300 audit(1747271524.060:549): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffc56875d10 a2=0 a3=7ffc56875cfc items=0 ppid=2516 pid=6538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:04.079911 kernel: audit: type=1327 audit(1747271524.060:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:04.048000 audit[6538]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc56875d10 a2=0 a3=7ffc56875cfc items=0 ppid=2516 pid=6538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:04.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:04.060000 audit[6538]: NETFILTER_CFG table=nat:142 family=2 entries=36 op=nft_register_rule pid=6538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:04.060000 audit[6538]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffc56875d10 a2=0 a3=7ffc56875cfc items=0 ppid=2516 pid=6538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:04.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:04.221263 sshd[6266]: pam_unix(sshd:session): session closed for user core May 15 01:12:04.248000 audit[6266]: USER_END pid=6266 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:04.254509 kernel: audit: type=1106 audit(1747271524.248:550): pid=6266 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:04.253000 audit[6266]: CRED_DISP pid=6266 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:04.258300 kernel: audit: type=1104 audit(1747271524.253:551): pid=6266 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:04.258626 systemd[1]: sshd@11-139.178.70.104:22-147.75.109.163:50274.service: Deactivated successfully. May 15 01:12:04.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.104:22-147.75.109.163:50274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:12:04.264277 kernel: audit: type=1131 audit(1747271524.257:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.104:22-147.75.109.163:50274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:04.264419 systemd[1]: session-10.scope: Deactivated successfully. May 15 01:12:04.269279 systemd-logind[1361]: Session 10 logged out. Waiting for processes to exit. May 15 01:12:04.345358 systemd-logind[1361]: Removed session 10. May 15 01:12:04.676000 audit[6543]: NETFILTER_CFG table=filter:143 family=2 entries=8 op=nft_register_rule pid=6543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:04.676000 audit[6543]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe44654970 a2=0 a3=7ffe4465495c items=0 ppid=2516 pid=6543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:04.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:04.680260 kernel: audit: type=1325 audit(1747271524.676:553): table=filter:143 family=2 entries=8 op=nft_register_rule pid=6543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:04.681000 audit[6543]: NETFILTER_CFG table=nat:144 family=2 entries=36 op=nft_register_rule pid=6543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:04.681000 audit[6543]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffe44654970 a2=0 a3=7ffe4465495c items=0 ppid=2516 pid=6543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:04.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:04.967717 kubelet[2379]: I0515 01:12:04.967643 2379 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a465e0d0-b1f5-4d68-a5ea-fce28821f59f" path="/var/lib/kubelet/pods/a465e0d0-b1f5-4d68-a5ea-fce28821f59f/volumes" May 15 01:12:05.685804 kubelet[2379]: I0515 01:12:05.668185 2379 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-554b8879-bdlb2" podStartSLOduration=7.66258594 podStartE2EDuration="7.66258594s" podCreationTimestamp="2025-05-15 01:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 01:12:04.964353088 +0000 UTC m=+88.436236184" watchObservedRunningTime="2025-05-15 01:12:05.66258594 +0000 UTC m=+89.134469027" May 15 01:12:06.744691 env[1378]: time="2025-05-15T01:12:06.744657874Z" level=info msg="StopContainer for \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\" with timeout 30 (s)" May 15 01:12:06.750986 env[1378]: time="2025-05-15T01:12:06.745096486Z" level=info msg="Stop container \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\" with signal terminated" May 15 01:12:06.850330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46-rootfs.mount: Deactivated successfully. 
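
[Annotation, not part of the captured log] The pod_startup_latency_tracker entry above reports podStartSLOduration=7.66258594s for calico-apiserver-554b8879-bdlb2; with both image-pull timestamps zeroed, that figure appears to be simply watchObservedRunningTime minus podCreationTimestamp as quoted in the record. A quick arithmetic check, illustrative only and rounded to microseconds:

# Illustrative arithmetic for the podStartSLOduration reported above.
from datetime import datetime, timezone

created = datetime(2025, 5, 15, 1, 11, 58, tzinfo=timezone.utc)                # podCreationTimestamp
observed = datetime(2025, 5, 15, 1, 12, 5, 662586, tzinfo=timezone.utc)        # watchObservedRunningTime (01:12:05.66258594)
print((observed - created).total_seconds())                                    # -> 7.662586, i.e. ~7.66258594 s
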
May 15 01:12:06.872000 audit[6574]: NETFILTER_CFG table=filter:145 family=2 entries=8 op=nft_register_rule pid=6574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:06.872000 audit[6574]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc7d75af30 a2=0 a3=7ffc7d75af1c items=0 ppid=2516 pid=6574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:06.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:06.931000 audit[6574]: NETFILTER_CFG table=nat:146 family=2 entries=40 op=nft_register_chain pid=6574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:06.931000 audit[6574]: SYSCALL arch=c000003e syscall=46 success=yes exit=13124 a0=3 a1=7ffc7d75af30 a2=0 a3=7ffc7d75af1c items=0 ppid=2516 pid=6574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:06.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:06.940000 audit[6590]: NETFILTER_CFG table=filter:147 family=2 entries=8 op=nft_register_rule pid=6590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:06.940000 audit[6590]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdb7f49990 a2=0 a3=7ffdb7f4997c items=0 ppid=2516 pid=6590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:06.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:06.944000 audit[6590]: NETFILTER_CFG table=nat:148 family=2 entries=40 op=nft_unregister_chain pid=6590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:06.944000 audit[6590]: SYSCALL arch=c000003e syscall=46 success=yes exit=11364 a0=3 a1=7ffdb7f49990 a2=0 a3=7ffdb7f4997c items=0 ppid=2516 pid=6590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:06.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:07.026185 env[1378]: time="2025-05-15T01:12:06.991536369Z" level=info msg="shim disconnected" id=624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46 May 15 01:12:07.026185 env[1378]: time="2025-05-15T01:12:06.991569310Z" level=warning msg="cleaning up after shim disconnected" id=624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46 namespace=k8s.io May 15 01:12:07.026185 env[1378]: time="2025-05-15T01:12:06.991575894Z" level=info msg="cleaning up dead shim" May 15 01:12:07.026185 env[1378]: time="2025-05-15T01:12:06.998410228Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:12:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6591 runtime=io.containerd.runc.v2\n" May 15 01:12:07.032334 env[1378]: 
time="2025-05-15T01:12:07.026483145Z" level=info msg="StopContainer for \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\" returns successfully" May 15 01:12:07.100340 env[1378]: time="2025-05-15T01:12:07.100303557Z" level=info msg="StopPodSandbox for \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\"" May 15 01:12:07.100512 env[1378]: time="2025-05-15T01:12:07.100495165Z" level=info msg="Container to stop \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 01:12:07.102738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee-shm.mount: Deactivated successfully. May 15 01:12:07.154916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee-rootfs.mount: Deactivated successfully. May 15 01:12:07.159860 env[1378]: time="2025-05-15T01:12:07.159738206Z" level=info msg="shim disconnected" id=aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee May 15 01:12:07.160010 env[1378]: time="2025-05-15T01:12:07.159999837Z" level=warning msg="cleaning up after shim disconnected" id=aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee namespace=k8s.io May 15 01:12:07.160076 env[1378]: time="2025-05-15T01:12:07.160066532Z" level=info msg="cleaning up dead shim" May 15 01:12:07.170984 env[1378]: time="2025-05-15T01:12:07.170950182Z" level=warning msg="cleanup warnings time=\"2025-05-15T01:12:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6624 runtime=io.containerd.runc.v2\n" May 15 01:12:07.681435 kubelet[2379]: I0515 01:12:07.681080 2379 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" May 15 01:12:08.758283 systemd-networkd[1141]: calie82da44cbca: Link DOWN May 15 01:12:08.758288 systemd-networkd[1141]: calie82da44cbca: Lost carrier May 15 01:12:09.035000 audit[6666]: NETFILTER_CFG table=filter:149 family=2 entries=44 op=nft_register_rule pid=6666 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:12:09.035000 audit[6666]: SYSCALL arch=c000003e syscall=46 success=yes exit=6552 a0=3 a1=7fff4c773d40 a2=0 a3=7fff4c773d2c items=0 ppid=5569 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:09.035000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 01:12:09.035000 audit[6666]: NETFILTER_CFG table=filter:150 family=2 entries=2 op=nft_unregister_chain pid=6666 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 01:12:09.035000 audit[6666]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff4c773d40 a2=0 a3=561d9864d000 items=0 ppid=5569 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:09.035000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 
01:12:09.389883 systemd[1]: Started sshd@12-139.178.70.104:22-147.75.109.163:50308.service. May 15 01:12:09.425283 kernel: kauditd_printk_skb: 23 callbacks suppressed May 15 01:12:09.437214 kernel: audit: type=1130 audit(1747271529.388:561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.104:22-147.75.109.163:50308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:09.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.104:22-147.75.109.163:50308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:10.042000 audit[6669]: USER_ACCT pid=6669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:10.146388 kernel: audit: type=1101 audit(1747271530.042:562): pid=6669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:10.146453 kernel: audit: type=1103 audit(1747271530.052:563): pid=6669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:10.146475 kernel: audit: type=1006 audit(1747271530.055:564): pid=6669 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 15 01:12:10.146498 kernel: audit: type=1300 audit(1747271530.055:564): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe344eae0 a2=3 a3=0 items=0 ppid=1 pid=6669 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:10.146521 kernel: audit: type=1327 audit(1747271530.055:564): proctitle=737368643A20636F7265205B707269765D May 15 01:12:10.052000 audit[6669]: CRED_ACQ pid=6669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:10.055000 audit[6669]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe344eae0 a2=3 a3=0 items=0 ppid=1 pid=6669 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:10.055000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:10.261601 sshd[6669]: Accepted publickey for core from 147.75.109.163 port 50308 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:10.196635 sshd[6669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:10.350138 systemd[1]: Started session-11.scope. 
May 15 01:12:10.360563 kernel: audit: type=1105 audit(1747271530.352:565): pid=6669 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:10.360608 kernel: audit: type=1103 audit(1747271530.352:566): pid=6673 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:10.352000 audit[6669]: USER_START pid=6669 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:10.352000 audit[6673]: CRED_ACQ pid=6673 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:10.350562 systemd-logind[1361]: New session 11 of user core. May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:08.705 [INFO][6650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:08.730 [INFO][6650] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" iface="eth0" netns="/var/run/netns/cni-0fd0524b-9459-77e5-a232-599dd3f20e65" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:08.737 [INFO][6650] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" iface="eth0" netns="/var/run/netns/cni-0fd0524b-9459-77e5-a232-599dd3f20e65" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:08.757 [INFO][6650] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" after=26.586664ms iface="eth0" netns="/var/run/netns/cni-0fd0524b-9459-77e5-a232-599dd3f20e65" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:08.757 [INFO][6650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:08.757 [INFO][6650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:10.360 [INFO][6658] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:10.391 [INFO][6658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:10.391 [INFO][6658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:10.739 [INFO][6658] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:10.739 [INFO][6658] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:10.741 [INFO][6658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:12:10.789742 env[1378]: 2025-05-15 01:12:10.773 [INFO][6650] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" May 15 01:12:10.791827 env[1378]: time="2025-05-15T01:12:10.791800739Z" level=info msg="TearDown network for sandbox \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\" successfully" May 15 01:12:10.791866 env[1378]: time="2025-05-15T01:12:10.791825554Z" level=info msg="StopPodSandbox for \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\" returns successfully" May 15 01:12:10.800910 systemd[1]: run-netns-cni\x2d0fd0524b\x2d9459\x2d77e5\x2da232\x2d599dd3f20e65.mount: Deactivated successfully. May 15 01:12:12.048000 audit[6684]: NETFILTER_CFG table=filter:151 family=2 entries=8 op=nft_register_rule pid=6684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:12.052476 kernel: audit: type=1325 audit(1747271532.048:567): table=filter:151 family=2 entries=8 op=nft_register_rule pid=6684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:12.061218 kernel: audit: type=1300 audit(1747271532.048:567): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff853be650 a2=0 a3=7fff853be63c items=0 ppid=2516 pid=6684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:12.048000 audit[6684]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff853be650 a2=0 a3=7fff853be63c items=0 ppid=2516 pid=6684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:12.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:12.064000 audit[6684]: NETFILTER_CFG table=nat:152 family=2 entries=36 op=nft_register_rule pid=6684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:12.064000 audit[6684]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff853be650 a2=0 a3=7fff853be63c items=0 ppid=2516 pid=6684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:12.064000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:12.208341 kubelet[2379]: I0515 01:12:12.208315 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab75af42-a771-4175-8a6f-81471c06a1c4-calico-apiserver-certs\") pod \"ab75af42-a771-4175-8a6f-81471c06a1c4\" (UID: \"ab75af42-a771-4175-8a6f-81471c06a1c4\") " May 15 01:12:12.209792 kubelet[2379]: I0515 01:12:12.208372 2379 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8qg8\" (UniqueName: \"kubernetes.io/projected/ab75af42-a771-4175-8a6f-81471c06a1c4-kube-api-access-m8qg8\") pod \"ab75af42-a771-4175-8a6f-81471c06a1c4\" (UID: \"ab75af42-a771-4175-8a6f-81471c06a1c4\") " May 15 01:12:12.339952 systemd[1]: var-lib-kubelet-pods-ab75af42\x2da771\x2d4175\x2d8a6f\x2d81471c06a1c4-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 01:12:12.344134 systemd[1]: var-lib-kubelet-pods-ab75af42\x2da771\x2d4175\x2d8a6f\x2d81471c06a1c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm8qg8.mount: Deactivated successfully. May 15 01:12:12.420785 kubelet[2379]: I0515 01:12:12.414928 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab75af42-a771-4175-8a6f-81471c06a1c4-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "ab75af42-a771-4175-8a6f-81471c06a1c4" (UID: "ab75af42-a771-4175-8a6f-81471c06a1c4"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 01:12:12.421929 kubelet[2379]: I0515 01:12:12.421302 2379 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab75af42-a771-4175-8a6f-81471c06a1c4-kube-api-access-m8qg8" (OuterVolumeSpecName: "kube-api-access-m8qg8") pod "ab75af42-a771-4175-8a6f-81471c06a1c4" (UID: "ab75af42-a771-4175-8a6f-81471c06a1c4"). InnerVolumeSpecName "kube-api-access-m8qg8". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 01:12:12.423143 kubelet[2379]: I0515 01:12:12.423127 2379 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab75af42-a771-4175-8a6f-81471c06a1c4-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 15 01:12:12.524901 kubelet[2379]: I0515 01:12:12.524881 2379 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-m8qg8\" (UniqueName: \"kubernetes.io/projected/ab75af42-a771-4175-8a6f-81471c06a1c4-kube-api-access-m8qg8\") on node \"localhost\" DevicePath \"\"" May 15 01:12:13.441855 systemd[1]: Started sshd@13-139.178.70.104:22-147.75.109.163:50318.service. May 15 01:12:13.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.104:22-147.75.109.163:50318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:12:13.450354 sshd[6669]: pam_unix(sshd:session): session closed for user core May 15 01:12:13.464000 audit[6669]: USER_END pid=6669 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:13.469000 audit[6669]: CRED_DISP pid=6669 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:13.506447 systemd[1]: sshd@12-139.178.70.104:22-147.75.109.163:50308.service: Deactivated successfully. May 15 01:12:13.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.104:22-147.75.109.163:50308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:13.507347 systemd[1]: session-11.scope: Deactivated successfully. May 15 01:12:13.507369 systemd-logind[1361]: Session 11 logged out. Waiting for processes to exit. May 15 01:12:13.509226 systemd-logind[1361]: Removed session 11. May 15 01:12:13.544000 audit[6712]: USER_ACCT pid=6712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:13.546317 sshd[6712]: Accepted publickey for core from 147.75.109.163 port 50318 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:13.545000 audit[6712]: CRED_ACQ pid=6712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:13.545000 audit[6712]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff5deb140 a2=3 a3=0 items=0 ppid=1 pid=6712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:13.545000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:13.547424 sshd[6712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:13.550998 systemd[1]: Started session-12.scope. May 15 01:12:13.551253 systemd-logind[1361]: New session 12 of user core. 
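A note on the mount unit names that systemd reports above (run-netns-cni\x2d0fd0524b\x2d… and var-lib-kubelet-pods-…\x7esecret-calico\x2dapiserver\x2dcerts.mount): they use systemd's unit-name escaping, in which '/' in the mount path becomes '-' and reserved bytes such as '-' and '~' are written as \x2d and \x7e. The standard way to reverse this is `systemd-escape --unescape --path`; the Python sketch below is only an illustration of the same scheme (it ignores corner cases such as leading dots), using the unit name taken from the log:

```python
import re

def systemd_unescape_mount(unit: str) -> str:
    """Reverse systemd unit-name escaping for a .mount unit:
    '-' separates path components, literal bytes appear as \\xXX
    (\\x2d for '-', \\x7e for '~')."""
    name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
    # '/' was mapped to '-'; undo that first ("\x2d" contains no '-', so this is safe).
    path = "/" + name.replace("-", "/")
    # Then decode the \xXX escapes back to their original characters.
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

print(systemd_unescape_mount(
    r"var-lib-kubelet-pods-ab75af42\x2da771\x2d4175\x2d8a6f\x2d81471c06a1c4"
    r"-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount"))
# /var/lib/kubelet/pods/ab75af42-a771-4175-8a6f-81471c06a1c4/volumes/kubernetes.io~secret/calico-apiserver-certs
```

Decoded, the unit is simply the secret volume directory whose unmount the kubelet reconciler logs in the surrounding entries.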
May 15 01:12:13.553000 audit[6712]: USER_START pid=6712 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:13.554000 audit[6717]: CRED_ACQ pid=6717 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:13.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.104:22-147.75.109.163:50322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:13.913803 systemd[1]: Started sshd@14-139.178.70.104:22-147.75.109.163:50322.service. May 15 01:12:13.924133 sshd[6712]: pam_unix(sshd:session): session closed for user core May 15 01:12:13.924000 audit[6712]: USER_END pid=6712 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:13.927000 audit[6712]: CRED_DISP pid=6712 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:13.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.104:22-147.75.109.163:50318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:13.929667 systemd[1]: sshd@13-139.178.70.104:22-147.75.109.163:50318.service: Deactivated successfully. May 15 01:12:13.930287 systemd[1]: session-12.scope: Deactivated successfully. May 15 01:12:13.931107 systemd-logind[1361]: Session 12 logged out. Waiting for processes to exit. May 15 01:12:13.931716 systemd-logind[1361]: Removed session 12. 
May 15 01:12:14.056202 sshd[6725]: Accepted publickey for core from 147.75.109.163 port 50322 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:14.054000 audit[6725]: USER_ACCT pid=6725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:14.055000 audit[6725]: CRED_ACQ pid=6725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:14.055000 audit[6725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd64f0e150 a2=3 a3=0 items=0 ppid=1 pid=6725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:14.055000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:14.058625 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:14.062118 systemd[1]: Started session-13.scope. May 15 01:12:14.062463 systemd-logind[1361]: New session 13 of user core. May 15 01:12:14.065000 audit[6725]: USER_START pid=6725 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:14.066000 audit[6731]: CRED_ACQ pid=6731 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:14.221744 sshd[6725]: pam_unix(sshd:session): session closed for user core May 15 01:12:14.226000 audit[6725]: USER_END pid=6725 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:14.226000 audit[6725]: CRED_DISP pid=6725 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:14.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.104:22-147.75.109.163:50322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:14.233279 systemd[1]: sshd@14-139.178.70.104:22-147.75.109.163:50322.service: Deactivated successfully. May 15 01:12:14.233805 systemd[1]: session-13.scope: Deactivated successfully. May 15 01:12:14.234355 systemd-logind[1361]: Session 13 logged out. Waiting for processes to exit. May 15 01:12:14.235110 systemd-logind[1361]: Removed session 13. 
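The audit PROCTITLE records in this stretch carry the process command line hex-encoded, because the raw value contains NUL argument separators. `ausearch -i` renders them readably; the minimal Python sketch below does the same decoding. The two hex strings are copied from the records above; everything else is illustrative:

```python
def decode_proctitle(hexvalue: str) -> str:
    """Audit PROCTITLE values are hex-encoded when the command line
    contains NUL argument separators; decode and join argv with spaces."""
    raw = bytes.fromhex(hexvalue)
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

# Values copied from the PROCTITLE records above.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
# iptables-restore -w 5 -W 100000 --noflush --counters
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# sshd: core [priv]
```

So the NETFILTER_CFG events are kube-proxy style `iptables-restore --noflush --counters` reloads, and the sshd PROCTITLE is the privileged "sshd: core [priv]" process handling each login.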
May 15 01:12:14.859300 kubelet[2379]: I0515 01:12:14.851478 2379 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab75af42-a771-4175-8a6f-81471c06a1c4" path="/var/lib/kubelet/pods/ab75af42-a771-4175-8a6f-81471c06a1c4/volumes" May 15 01:12:19.220165 systemd[1]: Started sshd@15-139.178.70.104:22-147.75.109.163:49926.service. May 15 01:12:19.225059 kernel: kauditd_printk_skb: 29 callbacks suppressed May 15 01:12:19.225118 kernel: audit: type=1130 audit(1747271539.219:590): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.104:22-147.75.109.163:49926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:19.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.104:22-147.75.109.163:49926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:19.337000 audit[6755]: USER_ACCT pid=6755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.342423 sshd[6755]: Accepted publickey for core from 147.75.109.163 port 49926 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:19.343259 kernel: audit: type=1101 audit(1747271539.337:591): pid=6755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.342000 audit[6755]: CRED_ACQ pid=6755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.349073 kernel: audit: type=1103 audit(1747271539.342:592): pid=6755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.349123 kernel: audit: type=1006 audit(1747271539.346:593): pid=6755 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 15 01:12:19.352403 kernel: audit: type=1300 audit(1747271539.346:593): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff52f17640 a2=3 a3=0 items=0 ppid=1 pid=6755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:19.346000 audit[6755]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff52f17640 a2=3 a3=0 items=0 ppid=1 pid=6755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:19.346000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:19.353575 kernel: audit: type=1327 audit(1747271539.346:593): proctitle=737368643A20636F7265205B707269765D May 15 01:12:19.353928 sshd[6755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 
01:12:19.359157 systemd[1]: Started session-14.scope. May 15 01:12:19.359168 systemd-logind[1361]: New session 14 of user core. May 15 01:12:19.363000 audit[6755]: USER_START pid=6755 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.368000 audit[6758]: CRED_ACQ pid=6758 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.371958 kernel: audit: type=1105 audit(1747271539.363:594): pid=6755 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.373279 kernel: audit: type=1103 audit(1747271539.368:595): pid=6758 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.555687 sshd[6755]: pam_unix(sshd:session): session closed for user core May 15 01:12:19.555000 audit[6755]: USER_END pid=6755 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.557000 audit[6755]: CRED_DISP pid=6755 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.563774 kernel: audit: type=1106 audit(1747271539.555:596): pid=6755 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.564697 kernel: audit: type=1104 audit(1747271539.557:597): pid=6755 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:19.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.104:22-147.75.109.163:49926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:19.563975 systemd[1]: sshd@15-139.178.70.104:22-147.75.109.163:49926.service: Deactivated successfully. May 15 01:12:19.564818 systemd[1]: session-14.scope: Deactivated successfully. May 15 01:12:19.566120 systemd-logind[1361]: Session 14 logged out. Waiting for processes to exit. May 15 01:12:19.570264 systemd-logind[1361]: Removed session 14. 
May 15 01:12:24.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.104:22-147.75.109.163:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:24.559442 systemd[1]: Started sshd@16-139.178.70.104:22-147.75.109.163:49930.service. May 15 01:12:24.564600 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 01:12:24.564638 kernel: audit: type=1130 audit(1747271544.558:599): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.104:22-147.75.109.163:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:24.623000 audit[6770]: USER_ACCT pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.629067 sshd[6770]: Accepted publickey for core from 147.75.109.163 port 49930 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:24.630470 kernel: audit: type=1101 audit(1747271544.623:600): pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.629000 audit[6770]: CRED_ACQ pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.634247 kernel: audit: type=1103 audit(1747271544.629:601): pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.634312 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:24.632000 audit[6770]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2b1bfcc0 a2=3 a3=0 items=0 ppid=1 pid=6770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:24.640017 kernel: audit: type=1006 audit(1747271544.632:602): pid=6770 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 15 01:12:24.640042 kernel: audit: type=1300 audit(1747271544.632:602): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2b1bfcc0 a2=3 a3=0 items=0 ppid=1 pid=6770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:24.640057 kernel: audit: type=1327 audit(1747271544.632:602): proctitle=737368643A20636F7265205B707269765D May 15 01:12:24.632000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:24.642835 systemd-logind[1361]: New session 15 of user core. May 15 01:12:24.643370 systemd[1]: Started session-15.scope. 
May 15 01:12:24.645000 audit[6770]: USER_START pid=6770 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.650654 kernel: audit: type=1105 audit(1747271544.645:603): pid=6770 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.650695 kernel: audit: type=1103 audit(1747271544.649:604): pid=6773 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.649000 audit[6773]: CRED_ACQ pid=6773 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.758608 sshd[6770]: pam_unix(sshd:session): session closed for user core May 15 01:12:24.758000 audit[6770]: USER_END pid=6770 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.758000 audit[6770]: CRED_DISP pid=6770 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.765898 kernel: audit: type=1106 audit(1747271544.758:605): pid=6770 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.765932 kernel: audit: type=1104 audit(1747271544.758:606): pid=6770 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:24.766069 systemd[1]: sshd@16-139.178.70.104:22-147.75.109.163:49930.service: Deactivated successfully. May 15 01:12:24.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.104:22-147.75.109.163:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:24.766793 systemd[1]: session-15.scope: Deactivated successfully. May 15 01:12:24.766812 systemd-logind[1361]: Session 15 logged out. Waiting for processes to exit. May 15 01:12:24.767733 systemd-logind[1361]: Removed session 15. May 15 01:12:29.348557 systemd[1]: run-containerd-runc-k8s.io-7e054a5c01bf65fcf4d801a27fe363fd883c70324936fa186462eea05da3b1fc-runc.ZjP2dE.mount: Deactivated successfully. 
May 15 01:12:29.760978 systemd[1]: Started sshd@17-139.178.70.104:22-147.75.109.163:54882.service. May 15 01:12:29.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.104:22-147.75.109.163:54882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:29.761792 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 01:12:29.761844 kernel: audit: type=1130 audit(1747271549.759:608): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.104:22-147.75.109.163:54882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:29.951403 sshd[6804]: Accepted publickey for core from 147.75.109.163 port 54882 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:29.950000 audit[6804]: USER_ACCT pid=6804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:29.958576 kernel: audit: type=1101 audit(1747271549.950:609): pid=6804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:29.958615 kernel: audit: type=1103 audit(1747271549.953:610): pid=6804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:29.960610 kernel: audit: type=1006 audit(1747271549.953:611): pid=6804 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 15 01:12:29.953000 audit[6804]: CRED_ACQ pid=6804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:29.953000 audit[6804]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2e3a3120 a2=3 a3=0 items=0 ppid=1 pid=6804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:29.964149 kernel: audit: type=1300 audit(1747271549.953:611): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2e3a3120 a2=3 a3=0 items=0 ppid=1 pid=6804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:29.964255 kernel: audit: type=1327 audit(1747271549.953:611): proctitle=737368643A20636F7265205B707269765D May 15 01:12:29.953000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:29.965845 sshd[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:29.971576 systemd-logind[1361]: New session 16 of user core. May 15 01:12:29.971919 systemd[1]: Started session-16.scope. 
May 15 01:12:29.974000 audit[6804]: USER_START pid=6804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:29.980742 kernel: audit: type=1105 audit(1747271549.974:612): pid=6804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:29.980816 kernel: audit: type=1103 audit(1747271549.979:613): pid=6808 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:29.979000 audit[6808]: CRED_ACQ pid=6808 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:30.201431 sshd[6804]: pam_unix(sshd:session): session closed for user core May 15 01:12:30.200000 audit[6804]: USER_END pid=6804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:30.204000 audit[6804]: CRED_DISP pid=6804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:30.209048 kernel: audit: type=1106 audit(1747271550.200:614): pid=6804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:30.209092 kernel: audit: type=1104 audit(1747271550.204:615): pid=6804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:30.209247 systemd[1]: sshd@17-139.178.70.104:22-147.75.109.163:54882.service: Deactivated successfully. May 15 01:12:30.209817 systemd[1]: session-16.scope: Deactivated successfully. May 15 01:12:30.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.104:22-147.75.109.163:54882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:30.210493 systemd-logind[1361]: Session 16 logged out. Waiting for processes to exit. May 15 01:12:30.211347 systemd-logind[1361]: Removed session 16. May 15 01:12:35.203989 systemd[1]: Started sshd@18-139.178.70.104:22-147.75.109.163:54898.service. 
May 15 01:12:35.204815 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 01:12:35.204843 kernel: audit: type=1130 audit(1747271555.202:617): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.104:22-147.75.109.163:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:35.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.104:22-147.75.109.163:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:35.268435 sshd[6818]: Accepted publickey for core from 147.75.109.163 port 54898 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:35.267000 audit[6818]: USER_ACCT pid=6818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.272119 sshd[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:35.272673 kernel: audit: type=1101 audit(1747271555.267:618): pid=6818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.272701 kernel: audit: type=1103 audit(1747271555.271:619): pid=6818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.277611 kernel: audit: type=1006 audit(1747271555.271:620): pid=6818 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 May 15 01:12:35.277644 kernel: audit: type=1300 audit(1747271555.271:620): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff023c0290 a2=3 a3=0 items=0 ppid=1 pid=6818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:35.271000 audit[6818]: CRED_ACQ pid=6818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.271000 audit[6818]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff023c0290 a2=3 a3=0 items=0 ppid=1 pid=6818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:35.281488 kernel: audit: type=1327 audit(1747271555.271:620): proctitle=737368643A20636F7265205B707269765D May 15 01:12:35.271000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:35.283903 systemd[1]: Started session-17.scope. May 15 01:12:35.284564 systemd-logind[1361]: New session 17 of user core. 
May 15 01:12:35.286000 audit[6818]: USER_START pid=6818 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.290000 audit[6821]: CRED_ACQ pid=6821 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.294952 kernel: audit: type=1105 audit(1747271555.286:621): pid=6818 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.294992 kernel: audit: type=1103 audit(1747271555.290:622): pid=6821 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.615214 sshd[6818]: pam_unix(sshd:session): session closed for user core May 15 01:12:35.616000 audit[6818]: USER_END pid=6818 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.622258 kernel: audit: type=1106 audit(1747271555.616:623): pid=6818 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.622403 systemd[1]: Started sshd@19-139.178.70.104:22-147.75.109.163:54912.service. May 15 01:12:35.623176 systemd[1]: sshd@18-139.178.70.104:22-147.75.109.163:54898.service: Deactivated successfully. May 15 01:12:35.623976 systemd[1]: session-17.scope: Deactivated successfully. May 15 01:12:35.628710 kernel: audit: type=1104 audit(1747271555.617:624): pid=6818 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.617000 audit[6818]: CRED_DISP pid=6818 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.626658 systemd-logind[1361]: Session 17 logged out. Waiting for processes to exit. May 15 01:12:35.628493 systemd-logind[1361]: Removed session 17. May 15 01:12:35.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.104:22-147.75.109.163:54912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:12:35.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.104:22-147.75.109.163:54898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:35.667000 audit[6851]: USER_ACCT pid=6851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.668512 sshd[6851]: Accepted publickey for core from 147.75.109.163 port 54912 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:35.668000 audit[6851]: CRED_ACQ pid=6851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.668000 audit[6851]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe04e83660 a2=3 a3=0 items=0 ppid=1 pid=6851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:35.668000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:35.669666 sshd[6851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:35.672584 systemd[1]: Started session-18.scope. May 15 01:12:35.672832 systemd-logind[1361]: New session 18 of user core. May 15 01:12:35.674000 audit[6851]: USER_START pid=6851 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:35.675000 audit[6856]: CRED_ACQ pid=6856 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:36.411766 systemd[1]: Started sshd@20-139.178.70.104:22-147.75.109.163:54926.service. May 15 01:12:36.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.104:22-147.75.109.163:54926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:36.416475 sshd[6851]: pam_unix(sshd:session): session closed for user core May 15 01:12:36.419000 audit[6851]: USER_END pid=6851 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:36.421000 audit[6851]: CRED_DISP pid=6851 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:36.423380 systemd[1]: sshd@19-139.178.70.104:22-147.75.109.163:54912.service: Deactivated successfully. 
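The systemd-logind messages repeated through this section ("New session N of user core." / "Removed session N.") can be paired by session number to see how long each SSH session lasted. The sketch below assumes exactly the line shape shown in this log; the regex and helper names are illustrative, not part of any tool appearing here:

```python
import re
from datetime import datetime

# Journal line shape assumed from the entries above, e.g.
#   "May 15 01:12:13.551253 systemd-logind[1361]: New session 12 of user core."
LINE = re.compile(
    r"(?P<ts>\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: "
    r"(?:New session (?P<new>\d+) of user \S+\.|Removed session (?P<gone>\d+)\.)")

def session_durations(lines):
    """Pair 'New session N' with 'Removed session N' and yield seconds per session."""
    opened = {}
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group("ts"), "%b %d %H:%M:%S.%f")
        if m.group("new"):
            opened[m.group("new")] = ts
        elif m.group("gone") in opened:
            yield m.group("gone"), (ts - opened.pop(m.group("gone"))).total_seconds()

sample = [
    "May 15 01:12:13.551253 systemd-logind[1361]: New session 12 of user core.",
    "May 15 01:12:13.931716 systemd-logind[1361]: Removed session 12.",
]
for sid, secs in session_durations(sample):
    print(f"session {sid}: {secs:.3f}s")   # session 12: 0.380s
```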
May 15 01:12:36.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.104:22-147.75.109.163:54912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:36.424148 systemd[1]: session-18.scope: Deactivated successfully. May 15 01:12:36.424364 systemd-logind[1361]: Session 18 logged out. Waiting for processes to exit. May 15 01:12:36.424843 systemd-logind[1361]: Removed session 18. May 15 01:12:36.476000 audit[6863]: USER_ACCT pid=6863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:36.478349 sshd[6863]: Accepted publickey for core from 147.75.109.163 port 54926 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:36.477000 audit[6863]: CRED_ACQ pid=6863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:36.479000 audit[6863]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4848cb10 a2=3 a3=0 items=0 ppid=1 pid=6863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:36.479000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:36.480849 sshd[6863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:36.484843 systemd[1]: Started session-19.scope. May 15 01:12:36.484961 systemd-logind[1361]: New session 19 of user core. May 15 01:12:36.486000 audit[6863]: USER_START pid=6863 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:36.487000 audit[6868]: CRED_ACQ pid=6868 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:38.460622 sshd[6863]: pam_unix(sshd:session): session closed for user core May 15 01:12:38.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.104:22-147.75.109.163:52444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 01:12:38.504000 audit[6863]: USER_END pid=6863 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:38.507000 audit[6863]: CRED_DISP pid=6863 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:38.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.104:22-147.75.109.163:54926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 01:12:38.503870 systemd[1]: Started sshd@21-139.178.70.104:22-147.75.109.163:52444.service. May 15 01:12:38.509788 systemd[1]: sshd@20-139.178.70.104:22-147.75.109.163:54926.service: Deactivated successfully. May 15 01:12:38.516508 systemd[1]: session-19.scope: Deactivated successfully. May 15 01:12:38.516725 systemd-logind[1361]: Session 19 logged out. Waiting for processes to exit. May 15 01:12:38.517299 systemd-logind[1361]: Removed session 19. May 15 01:12:38.574000 audit[6886]: NETFILTER_CFG table=filter:153 family=2 entries=20 op=nft_register_rule pid=6886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:38.574000 audit[6886]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc542e9da0 a2=0 a3=7ffc542e9d8c items=0 ppid=2516 pid=6886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:38.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:38.580000 audit[6886]: NETFILTER_CFG table=nat:154 family=2 entries=22 op=nft_register_rule pid=6886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:38.580000 audit[6886]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc542e9da0 a2=0 a3=0 items=0 ppid=2516 pid=6886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:38.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:38.594000 audit[6889]: NETFILTER_CFG table=filter:155 family=2 entries=32 op=nft_register_rule pid=6889 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:38.594000 audit[6889]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffd6d516650 a2=0 a3=7ffd6d51663c items=0 ppid=2516 pid=6889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:38.594000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:38.603000 audit[6889]: NETFILTER_CFG table=nat:156 family=2 entries=22 op=nft_register_rule pid=6889 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 01:12:38.603000 audit[6889]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd6d516650 a2=0 a3=0 items=0 ppid=2516 pid=6889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:38.603000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 01:12:38.660000 audit[6883]: USER_ACCT pid=6883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:38.663127 sshd[6883]: Accepted publickey for core from 147.75.109.163 port 52444 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4 May 15 01:12:38.662000 audit[6883]: CRED_ACQ pid=6883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:38.662000 audit[6883]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc092f130 a2=3 a3=0 items=0 ppid=1 pid=6883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 01:12:38.662000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 01:12:38.665269 sshd[6883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 01:12:38.673165 systemd-logind[1361]: New session 20 of user core. May 15 01:12:38.674141 systemd[1]: Started session-20.scope. 
May 15 01:12:38.686000 audit[6883]: USER_START pid=6883 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:38.690000 audit[6891]: CRED_ACQ pid=6891 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 15 01:12:39.426880 kubelet[2379]: I0515 01:12:39.426850 2379 scope.go:117] "RemoveContainer" containerID="624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46" May 15 01:12:39.460418 env[1378]: time="2025-05-15T01:12:39.460364143Z" level=info msg="RemoveContainer for \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\"" May 15 01:12:39.476048 env[1378]: time="2025-05-15T01:12:39.475966186Z" level=info msg="RemoveContainer for \"624f9a71f3cded373beb6239d547fd3ebe0374ef6ef89c3dbaa92ea603f10d46\" returns successfully" May 15 01:12:39.476286 kubelet[2379]: I0515 01:12:39.476274 2379 scope.go:117] "RemoveContainer" containerID="7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2" May 15 01:12:39.478615 env[1378]: time="2025-05-15T01:12:39.478558130Z" level=info msg="RemoveContainer for \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\"" May 15 01:12:39.484777 env[1378]: time="2025-05-15T01:12:39.484753105Z" level=info msg="RemoveContainer for \"7944af6cf2fe6102099421ceecc0256e3824edf38b059f09a0fcdc570ab49ed2\" returns successfully" May 15 01:12:39.518683 env[1378]: time="2025-05-15T01:12:39.518358398Z" level=info msg="StopPodSandbox for \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\"" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:39.959 [WARNING][6908] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:39.962 [INFO][6908] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:39.962 [INFO][6908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" iface="eth0" netns="" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:39.962 [INFO][6908] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:39.962 [INFO][6908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:41.033 [INFO][6915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:41.034 [INFO][6915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:41.035 [INFO][6915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:41.052 [WARNING][6915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:41.052 [INFO][6915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:41.054 [INFO][6915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 01:12:41.061718 env[1378]: 2025-05-15 01:12:41.057 [INFO][6908] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:41.061718 env[1378]: time="2025-05-15T01:12:41.059844706Z" level=info msg="TearDown network for sandbox \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\" successfully" May 15 01:12:41.061718 env[1378]: time="2025-05-15T01:12:41.059864693Z" level=info msg="StopPodSandbox for \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\" returns successfully" May 15 01:12:41.513652 env[1378]: time="2025-05-15T01:12:41.513430497Z" level=info msg="RemovePodSandbox for \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\"" May 15 01:12:41.513652 env[1378]: time="2025-05-15T01:12:41.513455249Z" level=info msg="Forcibly stopping sandbox \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\"" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:41.704 [WARNING][6934] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:41.706 [INFO][6934] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:41.706 [INFO][6934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" iface="eth0" netns="" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:41.706 [INFO][6934] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:41.706 [INFO][6934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:42.411 [INFO][6941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:42.413 [INFO][6941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:42.414 [INFO][6941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:42.428 [WARNING][6941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:42.428 [INFO][6941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" HandleID="k8s-pod-network.7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--srwdw-eth0" May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:42.429 [INFO][6941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:12:42.439245 env[1378]: 2025-05-15 01:12:42.433 [INFO][6934] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0" May 15 01:12:42.439245 env[1378]: time="2025-05-15T01:12:42.438711446Z" level=info msg="TearDown network for sandbox \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\" successfully" May 15 01:12:42.448221 env[1378]: time="2025-05-15T01:12:42.448140717Z" level=info msg="RemovePodSandbox \"7d8d33e562b7194cfbacdc70a709c80465380751fc3286be045e3bf17c62c0a0\" returns successfully" May 15 01:12:42.580390 env[1378]: time="2025-05-15T01:12:42.580348122Z" level=info msg="StopPodSandbox for \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\"" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:42.852 [WARNING][6961] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:42.852 [INFO][6961] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:42.852 [INFO][6961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" iface="eth0" netns="" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:42.852 [INFO][6961] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:42.852 [INFO][6961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:43.313 [INFO][6968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:43.318 [INFO][6968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:43.319 [INFO][6968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:43.339 [WARNING][6968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:43.339 [INFO][6968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0" May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:43.344 [INFO][6968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 01:12:43.362523 env[1378]: 2025-05-15 01:12:43.355 [INFO][6961] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee"
May 15 01:12:43.383824 env[1378]: time="2025-05-15T01:12:43.362538726Z" level=info msg="TearDown network for sandbox \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\" successfully"
May 15 01:12:43.383824 env[1378]: time="2025-05-15T01:12:43.362560664Z" level=info msg="StopPodSandbox for \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\" returns successfully"
May 15 01:12:44.300929 kernel: kauditd_printk_skb: 43 callbacks suppressed
May 15 01:12:44.331171 kernel: audit: type=1325 audit(1747271564.292:654): table=filter:157 family=2 entries=20 op=nft_register_rule pid=6978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 15 01:12:44.333955 kernel: audit: type=1300 audit(1747271564.292:654): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe07085070 a2=0 a3=7ffe0708505c items=0 ppid=2516 pid=6978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 01:12:44.335119 kernel: audit: type=1327 audit(1747271564.292:654): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 15 01:12:44.336034 kernel: audit: type=1325 audit(1747271564.305:655): table=nat:158 family=2 entries=106 op=nft_register_chain pid=6978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 15 01:12:44.339910 kernel: audit: type=1300 audit(1747271564.305:655): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe07085070 a2=0 a3=7ffe0708505c items=0 ppid=2516 pid=6978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 01:12:44.339960 kernel: audit: type=1327 audit(1747271564.305:655): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 15 01:12:44.292000 audit[6978]: NETFILTER_CFG table=filter:157 family=2 entries=20 op=nft_register_rule pid=6978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 15 01:12:44.292000 audit[6978]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe07085070 a2=0 a3=7ffe0708505c items=0 ppid=2516 pid=6978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 01:12:44.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 15 01:12:44.305000 audit[6978]: NETFILTER_CFG table=nat:158 family=2 entries=106 op=nft_register_chain pid=6978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 15 01:12:44.305000 audit[6978]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe07085070 a2=0 a3=7ffe0708505c items=0 ppid=2516 pid=6978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 01:12:44.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 15 01:12:45.507189 sshd[6883]: pam_unix(sshd:session): session closed for user core
May 15 01:12:45.592241 kernel: audit: type=1106 audit(1747271565.560:656): pid=6883 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:45.599781 kernel: audit: type=1130 audit(1747271565.561:657): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.104:22-147.75.109.163:52446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:45.599833 kernel: audit: type=1104 audit(1747271565.567:658): pid=6883 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:45.599852 kernel: audit: type=1131 audit(1747271565.583:659): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.104:22-147.75.109.163:52444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:45.560000 audit[6883]: USER_END pid=6883 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:45.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.104:22-147.75.109.163:52446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:45.567000 audit[6883]: CRED_DISP pid=6883 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:45.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.104:22-147.75.109.163:52444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:45.562812 systemd[1]: Started sshd@22-139.178.70.104:22-147.75.109.163:52446.service.
May 15 01:12:45.584121 systemd[1]: sshd@21-139.178.70.104:22-147.75.109.163:52444.service: Deactivated successfully.
May 15 01:12:45.584721 systemd[1]: session-20.scope: Deactivated successfully.
May 15 01:12:45.587969 systemd-logind[1361]: Session 20 logged out. Waiting for processes to exit.
May 15 01:12:45.590841 systemd-logind[1361]: Removed session 20.
May 15 01:12:45.686840 sshd[6980]: Accepted publickey for core from 147.75.109.163 port 52446 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4
May 15 01:12:45.685000 audit[6980]: USER_ACCT pid=6980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:45.687000 audit[6980]: CRED_ACQ pid=6980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:45.687000 audit[6980]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb300f4e0 a2=3 a3=0 items=0 ppid=1 pid=6980 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 01:12:45.687000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 15 01:12:45.704000 audit[6980]: USER_START pid=6980 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:45.705000 audit[6985]: CRED_ACQ pid=6985 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:45.691946 sshd[6980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 01:12:45.701352 systemd[1]: Started session-21.scope.
May 15 01:12:45.701601 systemd-logind[1361]: New session 21 of user core.
May 15 01:12:46.402555 env[1378]: time="2025-05-15T01:12:46.402521084Z" level=info msg="RemovePodSandbox for \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\""
May 15 01:12:46.407787 env[1378]: time="2025-05-15T01:12:46.402553968Z" level=info msg="Forcibly stopping sandbox \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\""
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:46.748 [WARNING][7005] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" WorkloadEndpoint="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0"
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:46.751 [INFO][7005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee"
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:46.751 [INFO][7005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" iface="eth0" netns=""
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:46.751 [INFO][7005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee"
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:46.751 [INFO][7005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee"
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:47.029 [INFO][7012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0"
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:47.031 [INFO][7012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:47.032 [INFO][7012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:47.044 [WARNING][7012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0"
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:47.044 [INFO][7012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" HandleID="k8s-pod-network.aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee" Workload="localhost-k8s-calico--apiserver--68cd77bbfb--stq89-eth0"
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:47.045 [INFO][7012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 01:12:47.061605 env[1378]: 2025-05-15 01:12:47.052 [INFO][7005] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee"
May 15 01:12:47.110686 env[1378]: time="2025-05-15T01:12:47.061602041Z" level=info msg="TearDown network for sandbox \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\" successfully"
May 15 01:12:47.110827 env[1378]: time="2025-05-15T01:12:47.110750663Z" level=info msg="RemovePodSandbox \"aa877e783d5430600bd5a5f9de65dd0a5231c67dbc865827832c1bc354dc7cee\" returns successfully"
May 15 01:12:48.876981 kubelet[2379]: E0515 01:12:48.876954 2379 kubelet.go:2511] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.066s"
May 15 01:12:50.155567 sshd[6980]: pam_unix(sshd:session): session closed for user core
May 15 01:12:50.209954 kernel: kauditd_printk_skb: 7 callbacks suppressed
May 15 01:12:50.215576 kernel: audit: type=1106 audit(1747271570.197:665): pid=6980 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:50.217815 kernel: audit: type=1104 audit(1747271570.198:666): pid=6980 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:50.197000 audit[6980]: USER_END pid=6980 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:50.198000 audit[6980]: CRED_DISP pid=6980 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:50.226590 systemd[1]: sshd@22-139.178.70.104:22-147.75.109.163:52446.service: Deactivated successfully.
May 15 01:12:50.236984 kernel: audit: type=1131 audit(1747271570.228:667): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.104:22-147.75.109.163:52446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:50.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.104:22-147.75.109.163:52446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:50.236111 systemd[1]: session-21.scope: Deactivated successfully.
May 15 01:12:50.236452 systemd-logind[1361]: Session 21 logged out. Waiting for processes to exit.
May 15 01:12:50.243226 systemd-logind[1361]: Removed session 21.
May 15 01:12:50.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.70.104:22-14.29.198.130:60178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:50.540199 systemd[1]: Started sshd@23-139.178.70.104:22-14.29.198.130:60178.service.
May 15 01:12:50.546036 kernel: audit: type=1130 audit(1747271570.539:668): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.70.104:22-14.29.198.130:60178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:51.467595 sshd[7022]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.29.198.130 user=root
May 15 01:12:51.466000 audit[7022]: USER_AUTH pid=7022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=14.29.198.130 addr=14.29.198.130 terminal=ssh res=failed'
May 15 01:12:51.471251 kernel: audit: type=1100 audit(1747271571.466:669): pid=7022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=14.29.198.130 addr=14.29.198.130 terminal=ssh res=failed'
May 15 01:12:53.511053 sshd[7022]: Failed password for root from 14.29.198.130 port 60178 ssh2
May 15 01:12:54.302036 sshd[7022]: Received disconnect from 14.29.198.130 port 60178:11: Bye Bye [preauth]
May 15 01:12:54.302036 sshd[7022]: Disconnected from authenticating user root 14.29.198.130 port 60178 [preauth]
May 15 01:12:54.306369 kernel: audit: type=1131 audit(1747271574.302:670): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.70.104:22-14.29.198.130:60178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:54.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.70.104:22-14.29.198.130:60178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:54.303037 systemd[1]: sshd@23-139.178.70.104:22-14.29.198.130:60178.service: Deactivated successfully.
May 15 01:12:55.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.70.104:22-147.75.109.163:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:55.138789 systemd[1]: Started sshd@24-139.178.70.104:22-147.75.109.163:55072.service.
May 15 01:12:55.162354 kernel: audit: type=1130 audit(1747271575.137:671): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.70.104:22-147.75.109.163:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:55.397000 audit[7028]: USER_ACCT pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:55.425678 kernel: audit: type=1101 audit(1747271575.397:672): pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:55.432600 kernel: audit: type=1103 audit(1747271575.408:673): pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:55.435969 kernel: audit: type=1006 audit(1747271575.408:674): pid=7028 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
May 15 01:12:55.436867 kernel: audit: type=1300 audit(1747271575.408:674): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff30731d70 a2=3 a3=0 items=0 ppid=1 pid=7028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 01:12:55.438064 kernel: audit: type=1327 audit(1747271575.408:674): proctitle=737368643A20636F7265205B707269765D
May 15 01:12:55.408000 audit[7028]: CRED_ACQ pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:55.408000 audit[7028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff30731d70 a2=3 a3=0 items=0 ppid=1 pid=7028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 01:12:55.408000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 15 01:12:55.456286 sshd[7028]: Accepted publickey for core from 147.75.109.163 port 55072 ssh2: RSA SHA256:YZM/HFbfbKbCMyxHxRH5w93dQ/AdvMi+wFj0w9zjch4
May 15 01:12:55.433450 sshd[7028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 01:12:55.469060 systemd[1]: Started session-22.scope.
May 15 01:12:55.469310 systemd-logind[1361]: New session 22 of user core.
May 15 01:12:55.478344 kernel: audit: type=1105 audit(1747271575.472:675): pid=7028 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:55.472000 audit[7028]: USER_START pid=7028 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:55.473000 audit[7031]: CRED_ACQ pid=7031 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:55.482283 kernel: audit: type=1103 audit(1747271575.473:676): pid=7031 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:56.447493 sshd[7028]: pam_unix(sshd:session): session closed for user core
May 15 01:12:56.465000 audit[7028]: USER_END pid=7028 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:56.479541 kernel: audit: type=1106 audit(1747271576.465:677): pid=7028 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:56.481260 kernel: audit: type=1104 audit(1747271576.467:678): pid=7028 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:56.481473 kernel: audit: type=1131 audit(1747271576.469:679): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.70.104:22-147.75.109.163:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:56.467000 audit[7028]: CRED_DISP pid=7028 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 15 01:12:56.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.70.104:22-147.75.109.163:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 01:12:56.470425 systemd[1]: sshd@24-139.178.70.104:22-147.75.109.163:55072.service: Deactivated successfully.
May 15 01:12:56.470935 systemd[1]: session-22.scope: Deactivated successfully.
May 15 01:12:56.473453 systemd-logind[1361]: Session 22 logged out. Waiting for processes to exit.
May 15 01:12:56.477307 systemd-logind[1361]: Removed session 22.