Mar 17 18:53:02.657345 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:53:02.657359 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:53:02.657365 kernel: Disabled fast string operations
Mar 17 18:53:02.657369 kernel: BIOS-provided physical RAM map:
Mar 17 18:53:02.657373 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Mar 17 18:53:02.657377 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Mar 17 18:53:02.657383 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Mar 17 18:53:02.657387 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Mar 17 18:53:02.657391 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Mar 17 18:53:02.657395 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Mar 17 18:53:02.657399 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Mar 17 18:53:02.657403 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Mar 17 18:53:02.657407 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Mar 17 18:53:02.657411 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Mar 17 18:53:02.657417 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Mar 17 18:53:02.657421 kernel: NX (Execute Disable) protection: active
Mar 17 18:53:02.657426 kernel: SMBIOS 2.7 present.
Mar 17 18:53:02.657430 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Mar 17 18:53:02.657435 kernel: vmware: hypercall mode: 0x00
Mar 17 18:53:02.657439 kernel: Hypervisor detected: VMware
Mar 17 18:53:02.657444 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Mar 17 18:53:02.657449 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Mar 17 18:53:02.657453 kernel: vmware: using clock offset of 4169481544 ns
Mar 17 18:53:02.657457 kernel: tsc: Detected 3408.000 MHz processor
Mar 17 18:53:02.657462 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:53:02.657467 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:53:02.657472 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Mar 17 18:53:02.657476 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:53:02.657481 kernel: total RAM covered: 3072M
Mar 17 18:53:02.657486 kernel: Found optimal setting for mtrr clean up
Mar 17 18:53:02.657491 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Mar 17 18:53:02.657496 kernel: Using GB pages for direct mapping
Mar 17 18:53:02.657500 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:53:02.657505 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Mar 17 18:53:02.657509 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Mar 17 18:53:02.657513 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Mar 17 18:53:02.657518 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Mar 17 18:53:02.657522 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Mar 17 18:53:02.657527 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Mar 17 18:53:02.657532 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Mar 17 18:53:02.657538 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Mar 17 18:53:02.657543 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Mar 17 18:53:02.657548 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Mar 17 18:53:02.657553 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Mar 17 18:53:02.657559 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Mar 17 18:53:02.657564 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Mar 17 18:53:02.657568 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Mar 17 18:53:02.657573 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Mar 17 18:53:02.657578 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Mar 17 18:53:02.657583 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Mar 17 18:53:02.657588 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Mar 17 18:53:02.657592 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Mar 17 18:53:02.657597 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Mar 17 18:53:02.657603 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Mar 17 18:53:02.657608 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Mar 17 18:53:02.657612 kernel: system APIC only can use physical flat
Mar 17 18:53:02.657617 kernel: Setting APIC routing to physical flat.
Mar 17 18:53:02.657622 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 17 18:53:02.657627 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Mar 17 18:53:02.657631 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Mar 17 18:53:02.657636 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Mar 17 18:53:02.657641 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Mar 17 18:53:02.657646 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Mar 17 18:53:02.657651 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Mar 17 18:53:02.657656 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Mar 17 18:53:02.657661 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Mar 17 18:53:02.657665 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Mar 17 18:53:02.657670 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Mar 17 18:53:02.657675 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Mar 17 18:53:02.657680 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Mar 17 18:53:02.657684 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Mar 17 18:53:02.657689 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Mar 17 18:53:02.657695 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Mar 17 18:53:02.657699 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Mar 17 18:53:02.657704 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Mar 17 18:53:02.657709 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Mar 17 18:53:02.657714 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Mar 17 18:53:02.657719 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Mar 17 18:53:02.657729 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Mar 17 18:53:02.657756 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Mar 17 18:53:02.657760 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Mar 17 18:53:02.657765 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Mar 17 18:53:02.657772 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Mar 17 18:53:02.657776 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Mar 17 18:53:02.657796 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Mar 17 18:53:02.657801 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Mar 17 18:53:02.657805 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Mar 17 18:53:02.657810 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Mar 17 18:53:02.657815 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Mar 17 18:53:02.657820 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Mar 17 18:53:02.657825 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Mar 17 18:53:02.657829 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Mar 17 18:53:02.657835 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Mar 17 18:53:02.657840 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Mar 17 18:53:02.657844 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Mar 17 18:53:02.657849 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Mar 17 18:53:02.657854 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Mar 17 18:53:02.657858 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Mar 17 18:53:02.657863 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Mar 17 18:53:02.657868 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Mar 17 18:53:02.657872 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Mar 17 18:53:02.657877 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Mar 17 18:53:02.657883 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Mar 17 18:53:02.657888 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Mar 17 18:53:02.657892 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Mar 17 18:53:02.657897 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Mar 17 18:53:02.657901 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Mar 17 18:53:02.657906 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Mar 17 18:53:02.657911 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Mar 17 18:53:02.657916 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Mar 17 18:53:02.657920 kernel: SRAT: PXM 0 -> APIC 0x6a 
-> Node 0 Mar 17 18:53:02.657925 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Mar 17 18:53:02.657931 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Mar 17 18:53:02.657935 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Mar 17 18:53:02.657940 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Mar 17 18:53:02.657945 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Mar 17 18:53:02.657949 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Mar 17 18:53:02.657962 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Mar 17 18:53:02.657971 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Mar 17 18:53:02.657977 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Mar 17 18:53:02.657982 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Mar 17 18:53:02.657987 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Mar 17 18:53:02.657992 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Mar 17 18:53:02.657998 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Mar 17 18:53:02.658003 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Mar 17 18:53:02.658008 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Mar 17 18:53:02.658013 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Mar 17 18:53:02.658018 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Mar 17 18:53:02.658023 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Mar 17 18:53:02.658028 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Mar 17 18:53:02.658034 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Mar 17 18:53:02.658038 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Mar 17 18:53:02.658043 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Mar 17 18:53:02.658048 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Mar 17 18:53:02.658053 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Mar 17 18:53:02.658059 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Mar 17 18:53:02.658064 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Mar 17 18:53:02.658069 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Mar 17 18:53:02.658074 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Mar 17 18:53:02.658079 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Mar 17 18:53:02.658085 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Mar 17 18:53:02.658089 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Mar 17 18:53:02.658095 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Mar 17 18:53:02.658100 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Mar 17 18:53:02.658104 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Mar 17 18:53:02.658110 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Mar 17 18:53:02.658115 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Mar 17 18:53:02.658120 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Mar 17 18:53:02.658125 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Mar 17 18:53:02.658130 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Mar 17 18:53:02.658136 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Mar 17 18:53:02.658140 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Mar 17 18:53:02.658145 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Mar 17 18:53:02.658151 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Mar 17 18:53:02.658156 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Mar 17 18:53:02.658161 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Mar 17 18:53:02.658166 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Mar 17 18:53:02.658171 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Mar 17 18:53:02.658176 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Mar 17 18:53:02.658181 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Mar 17 18:53:02.658186 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Mar 17 18:53:02.658191 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Mar 17 18:53:02.658197 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Mar 17 18:53:02.658202 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Mar 17 18:53:02.658207 kernel: SRAT: PXM 0 -> 
APIC 0xd6 -> Node 0 Mar 17 18:53:02.658212 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Mar 17 18:53:02.658217 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Mar 17 18:53:02.658222 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Mar 17 18:53:02.658227 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Mar 17 18:53:02.658233 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Mar 17 18:53:02.658238 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Mar 17 18:53:02.658243 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Mar 17 18:53:02.658248 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Mar 17 18:53:02.658253 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Mar 17 18:53:02.658258 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Mar 17 18:53:02.658263 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Mar 17 18:53:02.658268 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Mar 17 18:53:02.658273 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Mar 17 18:53:02.658278 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Mar 17 18:53:02.658284 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Mar 17 18:53:02.658289 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Mar 17 18:53:02.658294 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Mar 17 18:53:02.658299 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Mar 17 18:53:02.658304 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Mar 17 18:53:02.658309 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Mar 17 18:53:02.658314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 17 18:53:02.658319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Mar 17 18:53:02.658324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Mar 17 18:53:02.658331 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Mar 17 18:53:02.658336 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Mar 17 18:53:02.658341 kernel: Zone ranges: Mar 17 18:53:02.658346 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 18:53:02.658351 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Mar 17 18:53:02.658356 kernel: Normal empty Mar 17 18:53:02.658362 kernel: Movable zone start for each node Mar 17 18:53:02.658367 kernel: Early memory node ranges Mar 17 18:53:02.658372 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Mar 17 18:53:02.658377 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Mar 17 18:53:02.658383 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Mar 17 18:53:02.658388 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Mar 17 18:53:02.658393 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 18:53:02.658398 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Mar 17 18:53:02.658403 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Mar 17 18:53:02.658409 kernel: ACPI: PM-Timer IO Port: 0x1008 Mar 17 18:53:02.658414 kernel: system APIC only can use physical flat Mar 17 18:53:02.658419 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Mar 17 18:53:02.658424 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Mar 17 18:53:02.658430 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Mar 17 18:53:02.658435 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Mar 17 18:53:02.658440 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Mar 17 18:53:02.658445 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Mar 17 18:53:02.658450 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Mar 17 18:53:02.658455 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] 
high edge lint[0x1]) Mar 17 18:53:02.658460 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Mar 17 18:53:02.658465 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Mar 17 18:53:02.658470 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Mar 17 18:53:02.658475 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Mar 17 18:53:02.658481 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Mar 17 18:53:02.658487 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Mar 17 18:53:02.658492 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Mar 17 18:53:02.658497 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Mar 17 18:53:02.658502 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Mar 17 18:53:02.658507 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Mar 17 18:53:02.658512 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Mar 17 18:53:02.658517 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Mar 17 18:53:02.658522 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Mar 17 18:53:02.658528 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Mar 17 18:53:02.658534 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Mar 17 18:53:02.658539 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Mar 17 18:53:02.658544 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Mar 17 18:53:02.658549 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Mar 17 18:53:02.658554 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Mar 17 18:53:02.658559 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Mar 17 18:53:02.658564 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Mar 17 18:53:02.658569 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Mar 17 18:53:02.658574 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Mar 17 18:53:02.658580 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Mar 17 18:53:02.658585 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Mar 17 18:53:02.658590 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Mar 17 18:53:02.658596 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Mar 17 18:53:02.658601 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Mar 17 18:53:02.658606 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Mar 17 18:53:02.658611 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Mar 17 18:53:02.658616 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Mar 17 18:53:02.658621 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Mar 17 18:53:02.658627 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Mar 17 18:53:02.658632 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Mar 17 18:53:02.658637 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Mar 17 18:53:02.658642 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Mar 17 18:53:02.658647 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Mar 17 18:53:02.658652 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Mar 17 18:53:02.658657 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Mar 17 18:53:02.658662 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Mar 17 18:53:02.658667 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Mar 17 18:53:02.658672 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Mar 17 18:53:02.658678 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x32] high edge lint[0x1]) Mar 17 18:53:02.658683 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Mar 17 18:53:02.658688 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Mar 17 18:53:02.658694 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Mar 17 18:53:02.658699 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Mar 17 18:53:02.658704 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Mar 17 18:53:02.658709 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Mar 17 18:53:02.658714 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Mar 17 18:53:02.658719 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Mar 17 18:53:02.658729 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Mar 17 18:53:02.658737 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Mar 17 18:53:02.658746 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Mar 17 18:53:02.658752 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Mar 17 18:53:02.658757 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Mar 17 18:53:02.658762 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Mar 17 18:53:02.658767 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Mar 17 18:53:02.658772 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Mar 17 18:53:02.658777 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Mar 17 18:53:02.658783 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Mar 17 18:53:02.658788 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Mar 17 18:53:02.658793 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Mar 17 18:53:02.658799 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Mar 17 18:53:02.658804 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Mar 17 18:53:02.658809 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Mar 17 18:53:02.658814 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Mar 17 18:53:02.658819 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Mar 17 18:53:02.658824 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Mar 17 18:53:02.658829 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Mar 17 18:53:02.658835 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Mar 17 18:53:02.658840 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Mar 17 18:53:02.658846 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Mar 17 18:53:02.658851 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Mar 17 18:53:02.658856 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Mar 17 18:53:02.658861 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Mar 17 18:53:02.658866 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Mar 17 18:53:02.658871 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Mar 17 18:53:02.658876 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Mar 17 18:53:02.658882 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Mar 17 18:53:02.658888 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Mar 17 18:53:02.658893 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Mar 17 18:53:02.658898 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Mar 17 18:53:02.658903 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Mar 17 18:53:02.658908 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Mar 17 18:53:02.658913 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Mar 17 18:53:02.658918 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Mar 17 18:53:02.658923 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Mar 17 18:53:02.658928 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Mar 17 18:53:02.658934 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Mar 17 18:53:02.658940 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Mar 17 18:53:02.658945 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Mar 17 18:53:02.658950 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Mar 17 18:53:02.659034 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Mar 17 18:53:02.659042 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Mar 17 18:53:02.659048 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Mar 17 18:53:02.659053 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Mar 17 18:53:02.659058 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Mar 17 18:53:02.659065 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Mar 17 18:53:02.659070 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Mar 17 18:53:02.659075 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Mar 17 18:53:02.659080 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Mar 17 18:53:02.659085 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Mar 17 18:53:02.659090 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Mar 17 18:53:02.659095 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Mar 17 18:53:02.659101 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Mar 17 18:53:02.659106 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Mar 17 18:53:02.659111 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Mar 17 18:53:02.659117 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Mar 17 18:53:02.659122 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Mar 17 18:53:02.659127 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Mar 17 18:53:02.659132 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Mar 17 18:53:02.659137 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Mar 17 18:53:02.659142 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Mar 17 18:53:02.659147 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Mar 17 18:53:02.659152 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Mar 17 18:53:02.659157 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Mar 17 18:53:02.659163 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Mar 17 18:53:02.659168 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Mar 17 18:53:02.659173 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Mar 17 18:53:02.659179 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Mar 17 18:53:02.659184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Mar 17 18:53:02.659189 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 18:53:02.659194 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Mar 17 18:53:02.659199 kernel: TSC deadline timer available Mar 17 18:53:02.659205 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Mar 17 18:53:02.659211 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Mar 17 18:53:02.659216 kernel: Booting paravirtualized kernel on VMware hypervisor Mar 17 18:53:02.659221 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 18:53:02.659227 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Mar 17 18:53:02.659232 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Mar 17 18:53:02.659237 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Mar 17 18:53:02.659242 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Mar 17 18:53:02.659247 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Mar 17 18:53:02.659252 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Mar 17 18:53:02.659258 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Mar 17 18:53:02.659263 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Mar 17 18:53:02.659268 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Mar 17 18:53:02.659273 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Mar 17 18:53:02.659286 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Mar 17 18:53:02.659292 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Mar 17 18:53:02.659297 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Mar 17 18:53:02.659303 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Mar 17 18:53:02.659308 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Mar 17 18:53:02.659314 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Mar 17 18:53:02.659319 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Mar 17 18:53:02.659325 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Mar 17 18:53:02.659330 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Mar 17 18:53:02.659336 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Mar 17 18:53:02.659341 kernel: Policy zone: DMA32 Mar 17 18:53:02.659347 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 18:53:02.659353 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Mar 17 18:53:02.659360 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Mar 17 18:53:02.659365 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Mar 17 18:53:02.659371 kernel: printk: log_buf_len min size: 262144 bytes
Mar 17 18:53:02.659376 kernel: printk: log_buf_len: 1048576 bytes
Mar 17 18:53:02.659382 kernel: printk: early log buf free: 239728(91%)
Mar 17 18:53:02.659387 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:53:02.659393 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:53:02.659398 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:53:02.659404 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 155976K reserved, 0K cma-reserved)
Mar 17 18:53:02.659410 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Mar 17 18:53:02.659416 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:53:02.659422 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:53:02.659428 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:53:02.659434 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:53:02.659440 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Mar 17 18:53:02.659446 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:53:02.659452 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:53:02.659457 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:53:02.659463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Mar 17 18:53:02.659468 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Mar 17 18:53:02.659474 kernel: random: crng init done
Mar 17 18:53:02.659479 kernel: Console: colour VGA+ 80x25
Mar 17 18:53:02.659484 kernel: printk: console [tty0] enabled
Mar 17 18:53:02.659491 kernel: printk: console [ttyS0] enabled
Mar 17 18:53:02.659496 kernel: ACPI: Core revision 20210730
Mar 17 18:53:02.659502 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Mar 17 18:53:02.659508 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:53:02.659513 kernel: x2apic enabled
Mar 17 18:53:02.659519 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:53:02.659524 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:53:02.659530 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Mar 17 18:53:02.659536 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Mar 17 18:53:02.659542 kernel: Disabled fast string operations
Mar 17 18:53:02.659548 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 17 18:53:02.659553 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 17 18:53:02.659559 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:53:02.659564 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:53:02.659570 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 17 18:53:02.659576 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 17 18:53:02.659581 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 17 18:53:02.659587 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:53:02.659593 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Mar 17 18:53:02.659599 kernel: RETBleed: Mitigation: Enhanced IBRS
Mar 17 18:53:02.659604 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:53:02.659610 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:53:02.659615 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:53:02.659621 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 17 18:53:02.659627 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 17 18:53:02.659632 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:53:02.659638 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:53:02.659644 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:53:02.659650 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:53:02.659655 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 18:53:02.659662 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:53:02.659667 kernel: pid_max: default: 131072 minimum: 1024
Mar 17 18:53:02.659673 kernel: LSM: Security Framework initializing
Mar 17 18:53:02.659678 kernel: SELinux: Initializing.
Mar 17 18:53:02.659684 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:53:02.659690 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:53:02.659696 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Mar 17 18:53:02.659702 kernel: Performance Events: Skylake events, core PMU driver.
Mar 17 18:53:02.659707 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Mar 17 18:53:02.659713 kernel: core: CPUID marked event: 'instructions' unavailable
Mar 17 18:53:02.659718 kernel: core: CPUID marked event: 'bus cycles' unavailable
Mar 17 18:53:02.659724 kernel: core: CPUID marked event: 'cache references' unavailable
Mar 17 18:53:02.659747 kernel: core: CPUID marked event: 'cache misses' unavailable
Mar 17 18:53:02.659753 kernel: core: CPUID marked event: 'branch instructions' unavailable
Mar 17 18:53:02.659759 kernel: core: CPUID marked event: 'branch misses' unavailable
Mar 17 18:53:02.659764 kernel: ... version:                1
Mar 17 18:53:02.659770 kernel: ... bit width:              48
Mar 17 18:53:02.659776 kernel: ... generic registers:      4
Mar 17 18:53:02.659781 kernel: ... value mask:             0000ffffffffffff
Mar 17 18:53:02.659787 kernel: ... max period:             000000007fffffff
Mar 17 18:53:02.659793 kernel: ... fixed-purpose events:   0
Mar 17 18:53:02.659798 kernel: ... event mask:             000000000000000f
Mar 17 18:53:02.659804 kernel: signal: max sigframe size: 1776
Mar 17 18:53:02.659810 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:53:02.659816 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 18:53:02.659822 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:53:02.659827 kernel: x86: Booting SMP configuration:
Mar 17 18:53:02.659833 kernel: .... node #0, CPUs: #1
Mar 17 18:53:02.659839 kernel: Disabled fast string operations
Mar 17 18:53:02.659844 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Mar 17 18:53:02.659850 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 17 18:53:02.659855 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:53:02.659861 kernel: smpboot: Max logical packages: 128
Mar 17 18:53:02.659868 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Mar 17 18:53:02.659873 kernel: devtmpfs: initialized
Mar 17 18:53:02.659879 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:53:02.659885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Mar 17 18:53:02.659890 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:53:02.659896 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Mar 17 18:53:02.659902 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:53:02.659907 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:53:02.659913 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:53:02.659919 kernel: audit: type=2000 audit(1742237581.060:1): state=initialized audit_enabled=0 res=1
Mar 17 18:53:02.659925 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:53:02.659931 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:53:02.659936 kernel: cpuidle: using governor menu
Mar 17 18:53:02.659942 kernel: Simple Boot Flag at 0x36 set to 0x80
Mar 17 18:53:02.659947 kernel: ACPI: bus type PCI registered
Mar 17 18:53:02.659953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:53:02.659965 kernel: dca service started, version 1.12.1
Mar 17 18:53:02.659970 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Mar 17 18:53:02.659978 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820
Mar 17 18:53:02.659983 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:53:02.659989 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:53:02.659995 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 18:53:02.660000 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 18:53:02.660006 kernel: ACPI: Added _OSI(Module Device) Mar 17 18:53:02.660011 kernel: ACPI: Added _OSI(Processor Device) Mar 17 18:53:02.660017 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 18:53:02.660024 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 18:53:02.660030 kernel: ACPI: Added _OSI(Linux-Dell-Video) Mar 17 18:53:02.660036 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Mar 17 18:53:02.660041 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Mar 17 18:53:02.660047 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 18:53:02.660053 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Mar 17 18:53:02.660058 kernel: ACPI: Interpreter enabled Mar 17 18:53:02.660064 kernel: ACPI: PM: (supports S0 S1 S5) Mar 17 18:53:02.660069 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 18:53:02.660075 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 18:53:02.660082 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Mar 17 18:53:02.660087 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Mar 17 18:53:02.660160 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 18:53:02.660210 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Mar 17 18:53:02.660256 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Mar 17 18:53:02.660264 kernel: PCI host bridge to bus 0000:00 Mar 17 18:53:02.660310 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 17 18:53:02.660354 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Mar 17 18:53:02.660395 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Mar 17 18:53:02.660435 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Mar 17 18:53:02.660475 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Mar 17 18:53:02.660515 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 17 18:53:02.660555 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 18:53:02.660595 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Mar 17 18:53:02.660636 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Mar 17 18:53:02.660690 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Mar 17 18:53:02.660746 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Mar 17 18:53:02.660800 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Mar 17 18:53:02.660850 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Mar 17 18:53:02.660897 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Mar 17 18:53:02.660945 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Mar 17 18:53:02.661050 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Mar 17 18:53:02.661101 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Mar 17 18:53:02.661158 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Mar 17 18:53:02.661209 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Mar 17 18:53:02.661257 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Mar 17 18:53:02.661338 kernel: 
pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Mar 17 18:53:02.663010 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Mar 17 18:53:02.663073 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Mar 17 18:53:02.663126 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Mar 17 18:53:02.663184 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Mar 17 18:53:02.663236 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Mar 17 18:53:02.663286 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Mar 17 18:53:02.663340 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Mar 17 18:53:02.663390 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Mar 17 18:53:02.663460 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 18:53:02.663516 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Mar 17 18:53:02.663574 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.663626 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.663681 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.663734 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.663786 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.663836 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.663889 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.663940 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664004 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664058 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664112 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664162 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664215 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664265 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664318 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664370 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664424 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664474 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664529 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664578 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664631 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664682 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664741 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664792 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664846 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.664897 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.664953 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.665028 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665083 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.665135 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665189 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 
class 0x060400 Mar 17 18:53:02.665241 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665295 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.665349 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665404 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.665455 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665508 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.665559 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665613 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.665666 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665720 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.665771 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665826 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.665879 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.665931 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666048 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666109 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666160 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666215 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666266 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666320 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666371 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666426 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666477 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666531 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666581 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666634 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666685 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666742 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666794 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666846 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.666897 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.666950 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Mar 17 18:53:02.667080 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.667139 kernel: pci_bus 0000:01: extended config space not accessible Mar 17 18:53:02.667192 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Mar 17 18:53:02.667244 kernel: pci_bus 0000:02: extended config space not accessible Mar 17 18:53:02.667252 kernel: acpiphp: Slot [32] registered Mar 17 18:53:02.667258 kernel: acpiphp: Slot [33] registered Mar 17 18:53:02.667264 kernel: acpiphp: Slot [34] registered Mar 17 18:53:02.667270 kernel: acpiphp: Slot [35] registered Mar 17 18:53:02.667275 kernel: acpiphp: Slot [36] registered Mar 17 18:53:02.667282 kernel: acpiphp: Slot [37] registered Mar 17 18:53:02.667288 kernel: acpiphp: Slot [38] registered Mar 17 18:53:02.667294 kernel: acpiphp: Slot [39] registered Mar 17 
18:53:02.667299 kernel: acpiphp: Slot [40] registered Mar 17 18:53:02.667305 kernel: acpiphp: Slot [41] registered Mar 17 18:53:02.667311 kernel: acpiphp: Slot [42] registered Mar 17 18:53:02.667317 kernel: acpiphp: Slot [43] registered Mar 17 18:53:02.667322 kernel: acpiphp: Slot [44] registered Mar 17 18:53:02.667328 kernel: acpiphp: Slot [45] registered Mar 17 18:53:02.667334 kernel: acpiphp: Slot [46] registered Mar 17 18:53:02.667340 kernel: acpiphp: Slot [47] registered Mar 17 18:53:02.667346 kernel: acpiphp: Slot [48] registered Mar 17 18:53:02.667351 kernel: acpiphp: Slot [49] registered Mar 17 18:53:02.667357 kernel: acpiphp: Slot [50] registered Mar 17 18:53:02.667362 kernel: acpiphp: Slot [51] registered Mar 17 18:53:02.667368 kernel: acpiphp: Slot [52] registered Mar 17 18:53:02.667374 kernel: acpiphp: Slot [53] registered Mar 17 18:53:02.667380 kernel: acpiphp: Slot [54] registered Mar 17 18:53:02.667385 kernel: acpiphp: Slot [55] registered Mar 17 18:53:02.667392 kernel: acpiphp: Slot [56] registered Mar 17 18:53:02.667398 kernel: acpiphp: Slot [57] registered Mar 17 18:53:02.667403 kernel: acpiphp: Slot [58] registered Mar 17 18:53:02.667409 kernel: acpiphp: Slot [59] registered Mar 17 18:53:02.667414 kernel: acpiphp: Slot [60] registered Mar 17 18:53:02.667420 kernel: acpiphp: Slot [61] registered Mar 17 18:53:02.667426 kernel: acpiphp: Slot [62] registered Mar 17 18:53:02.667431 kernel: acpiphp: Slot [63] registered Mar 17 18:53:02.667480 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Mar 17 18:53:02.667533 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Mar 17 18:53:02.667582 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Mar 17 18:53:02.667632 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 17 18:53:02.667680 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Mar 17 18:53:02.667739 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Mar 17 18:53:02.667790 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Mar 17 18:53:02.667840 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Mar 17 18:53:02.667889 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Mar 17 18:53:02.667940 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Mar 17 18:53:02.671532 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Mar 17 18:53:02.671791 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Mar 17 18:53:02.671857 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Mar 17 18:53:02.671913 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Mar 17 18:53:02.671998 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Mar 17 18:53:02.672054 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Mar 17 18:53:02.672119 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Mar 17 18:53:02.672187 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Mar 17 18:53:02.672253 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Mar 17 18:53:02.672318 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Mar 17 18:53:02.672384 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Mar 17 18:53:02.672445 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Mar 17 18:53:02.672503 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Mar 17 18:53:02.672553 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Mar 17 18:53:02.672620 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Mar 17 18:53:02.672680 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Mar 17 18:53:02.672731 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Mar 17 18:53:02.672794 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Mar 17 18:53:02.672863 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Mar 17 18:53:02.672932 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Mar 17 18:53:02.676510 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Mar 17 18:53:02.676584 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Mar 17 18:53:02.676641 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Mar 17 18:53:02.676706 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Mar 17 18:53:02.676758 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 17 18:53:02.676808 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Mar 17 18:53:02.677789 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Mar 17 18:53:02.677884 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Mar 17 18:53:02.677944 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Mar 17 18:53:02.678028 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Mar 17 18:53:02.678120 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Mar 17 18:53:02.678204 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Mar 17 18:53:02.678279 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Mar 17 18:53:02.678362 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Mar 17 18:53:02.678452 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Mar 17 18:53:02.678524 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Mar 17 18:53:02.678617 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Mar 17 18:53:02.678681 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Mar 17 18:53:02.678735 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Mar 17 18:53:02.678783 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Mar 17 18:53:02.678831 kernel: pci 0000:0b:00.0: supports D1 D2 Mar 17 18:53:02.678880 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 17 18:53:02.678928 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Mar 17 18:53:02.679094 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Mar 17 18:53:02.679146 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Mar 17 18:53:02.679193 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Mar 17 18:53:02.679240 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Mar 17 18:53:02.679290 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Mar 17 18:53:02.679341 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Mar 17 18:53:02.679397 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Mar 17 18:53:02.679463 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Mar 17 18:53:02.679519 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Mar 17 18:53:02.679570 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Mar 17 18:53:02.679615 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Mar 17 18:53:02.679663 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Mar 17 18:53:02.679709 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Mar 17 18:53:02.679758 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 17 18:53:02.679805 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Mar 17 18:53:02.679852 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Mar 17 18:53:02.679898 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 17 18:53:02.679945 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Mar 17 18:53:02.680005 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Mar 17 18:53:02.680058 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Mar 17 18:53:02.680108 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Mar 17 18:53:02.680155 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Mar 17 18:53:02.680201 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Mar 17 18:53:02.680248 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Mar 17 18:53:02.680299 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Mar 17 18:53:02.680345 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 17 18:53:02.680401 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Mar 17 18:53:02.680461 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Mar 17 18:53:02.680518 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Mar 17 18:53:02.680577 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 17 18:53:02.680643 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Mar 17 18:53:02.680709 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Mar 17 18:53:02.680775 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Mar 17 18:53:02.680835 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Mar 17 18:53:02.680889 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Mar 17 18:53:02.682253 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Mar 17 18:53:02.682317 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Mar 17 18:53:02.682375 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Mar 17 18:53:02.682429 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Mar 17 18:53:02.682485 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Mar 17 18:53:02.682532 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 17 18:53:02.682582 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Mar 17 18:53:02.682638 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Mar 17 18:53:02.682696 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 17 18:53:02.682758 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Mar 17 18:53:02.682825 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Mar 17 18:53:02.682878 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Mar 17 18:53:02.682928 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Mar 17 18:53:02.684254 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Mar 17 18:53:02.684317 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Mar 17 18:53:02.684375 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Mar 17 18:53:02.684432 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Mar 17 18:53:02.684496 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Mar 17 18:53:02.684557 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Mar 17 18:53:02.684623 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Mar 17 18:53:02.684688 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Mar 17 18:53:02.684740 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Mar 17 18:53:02.684791 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Mar 17 18:53:02.684845 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Mar 17 18:53:02.684908 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Mar 17 18:53:02.684976 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Mar 17 18:53:02.686169 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Mar 17 18:53:02.686228 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Mar 17 18:53:02.686287 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Mar 17 18:53:02.686341 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Mar 17 18:53:02.686720 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Mar 17 18:53:02.686789 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 17 18:53:02.686853 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Mar 17 18:53:02.686910 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Mar 17 18:53:02.689000 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Mar 17 18:53:02.689081 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Mar 17 18:53:02.689140 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Mar 17 18:53:02.689201 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Mar 17 18:53:02.689258 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Mar 17 18:53:02.689312 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Mar 17 18:53:02.689368 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Mar 17 18:53:02.689437 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Mar 17 18:53:02.689500 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Mar 17 18:53:02.689557 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 17 18:53:02.689570 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Mar 17 18:53:02.689580 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Mar 17 18:53:02.689590 kernel: ACPI: PCI: Interrupt link LNKB disabled Mar 17 18:53:02.689599 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 18:53:02.689608 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Mar 17 18:53:02.689617 kernel: iommu: Default domain type: Translated Mar 17 18:53:02.689629 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 18:53:02.689688 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Mar 17 18:53:02.689746 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 18:53:02.689798 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Mar 17 18:53:02.689807 kernel: vgaarb: loaded Mar 17 18:53:02.689815 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 18:53:02.689822 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 18:53:02.689830 kernel: PTP clock support registered Mar 17 18:53:02.689838 kernel: PCI: Using ACPI for IRQ routing Mar 17 18:53:02.689846 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 18:53:02.689854 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Mar 17 18:53:02.689860 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Mar 17 18:53:02.689866 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Mar 17 18:53:02.689874 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Mar 17 18:53:02.689883 kernel: clocksource: Switched to clocksource tsc-early Mar 17 18:53:02.689892 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 18:53:02.689901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 18:53:02.689906 kernel: pnp: PnP ACPI init Mar 17 18:53:02.689979 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Mar 17 18:53:02.690034 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Mar 17 18:53:02.690086 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Mar 17 18:53:02.690150 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Mar 17 18:53:02.690209 kernel: pnp 00:06: [dma 2] Mar 17 18:53:02.690475 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Mar 17 18:53:02.690535 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Mar 17 18:53:02.690593 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Mar 17 18:53:02.690602 kernel: pnp: PnP ACPI: found 8 devices Mar 17 18:53:02.690609 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 18:53:02.690618 kernel: NET: Registered PF_INET protocol family Mar 17 18:53:02.690624 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 18:53:02.690632 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 17 18:53:02.690638 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 18:53:02.690646 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 17 18:53:02.690652 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Mar 17 18:53:02.690658 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 17 18:53:02.690666 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 17 18:53:02.690675 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 17 18:53:02.690683 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 
18:53:02.691047 kernel: NET: Registered PF_XDP protocol family Mar 17 18:53:02.691127 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Mar 17 18:53:02.691198 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 17 18:53:02.691263 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 17 18:53:02.691324 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 17 18:53:02.691391 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 17 18:53:02.693015 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Mar 17 18:53:02.693106 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Mar 17 18:53:02.693164 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Mar 17 18:53:02.693215 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Mar 17 18:53:02.693265 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Mar 17 18:53:02.693334 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Mar 17 18:53:02.693399 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Mar 17 18:53:02.693466 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Mar 17 18:53:02.693526 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Mar 17 18:53:02.693582 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Mar 17 18:53:02.693655 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Mar 17 18:53:02.693711 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Mar 17 18:53:02.693778 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Mar 17 18:53:02.693844 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Mar 17 18:53:02.693904 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Mar 17 18:53:02.699391 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Mar 17 18:53:02.699483 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Mar 17 18:53:02.699548 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Mar 17 18:53:02.699613 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Mar 17 18:53:02.699673 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Mar 17 18:53:02.699741 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.701057 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.701115 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.703803 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.703861 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.703910 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.703968 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704023 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Mar 17 
18:53:02.704087 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704134 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.704199 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704247 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.704299 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704358 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.704425 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704482 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.704544 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704598 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.704646 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704693 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.704741 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704791 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.704838 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.704883 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.704936 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.705477 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.705537 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.705587 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.705636 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.705683 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.705731 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.705778 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.705829 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.705876 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.705924 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.705985 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706036 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706082 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706129 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706176 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706225 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706271 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706317 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706363 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706410 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706456 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706502 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706548 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] 
Mar 17 18:53:02.706594 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706641 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706687 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706733 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706780 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706825 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.706872 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.706927 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.707010 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.707084 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.707158 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.707237 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.707313 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.707387 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.707462 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.707539 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.707615 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.707691 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.707773 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.707852 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.707906 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.707963 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.708030 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.708079 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.708136 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.708184 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.709418 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.709472 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.709522 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.709570 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.709621 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.709668 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.709714 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.709760 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.709807 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Mar 17 18:53:02.709865 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Mar 17 18:53:02.709914 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Mar 17 18:53:02.709971 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Mar 17 18:53:02.710023 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Mar 17 18:53:02.710069 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Mar 17 18:53:02.710115 kernel: 
pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 17 18:53:02.710166 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Mar 17 18:53:02.710214 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Mar 17 18:53:02.710261 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Mar 17 18:53:02.710307 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Mar 17 18:53:02.710354 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Mar 17 18:53:02.710401 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Mar 17 18:53:02.710450 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Mar 17 18:53:02.710496 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Mar 17 18:53:02.710542 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Mar 17 18:53:02.710590 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Mar 17 18:53:02.710637 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Mar 17 18:53:02.710683 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Mar 17 18:53:02.710734 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Mar 17 18:53:02.710780 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Mar 17 18:53:02.710827 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Mar 17 18:53:02.710872 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Mar 17 18:53:02.710923 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Mar 17 18:53:02.711240 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Mar 17 18:53:02.711310 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 17 18:53:02.711384 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Mar 17 18:53:02.711441 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Mar 17 18:53:02.711490 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Mar 17 18:53:02.711539 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Mar 17 18:53:02.711586 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Mar 17 18:53:02.711632 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Mar 17 18:53:02.711678 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Mar 17 18:53:02.711724 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Mar 17 18:53:02.711770 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Mar 17 18:53:02.711823 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Mar 17 18:53:02.711869 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Mar 17 18:53:02.711914 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Mar 17 18:53:02.711969 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Mar 17 18:53:02.712030 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Mar 17 18:53:02.712104 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Mar 17 18:53:02.712177 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Mar 17 18:53:02.712252 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Mar 17 18:53:02.712326 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Mar 17 18:53:02.712403 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Mar 17 18:53:02.712477 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Mar 17 18:53:02.712552 kernel: pci 0000:00:16.2: bridge window [mem 
0xfcc00000-0xfccfffff] Mar 17 18:53:02.712631 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Mar 17 18:53:02.712708 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Mar 17 18:53:02.712783 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Mar 17 18:53:02.712860 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 17 18:53:02.712946 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Mar 17 18:53:02.713027 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Mar 17 18:53:02.713079 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 17 18:53:02.713137 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Mar 17 18:53:02.713187 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Mar 17 18:53:02.713234 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Mar 17 18:53:02.713284 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Mar 17 18:53:02.713337 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Mar 17 18:53:02.713396 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Mar 17 18:53:02.713444 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Mar 17 18:53:02.713492 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Mar 17 18:53:02.713538 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 17 18:53:02.713586 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Mar 17 18:53:02.713633 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Mar 17 18:53:02.713680 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Mar 17 18:53:02.713734 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 17 18:53:02.713782 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Mar 17 18:53:02.713828 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Mar 17 18:53:02.713874 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Mar 17 18:53:02.713920 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Mar 17 18:53:02.714068 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Mar 17 18:53:02.714499 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Mar 17 18:53:02.715055 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Mar 17 18:53:02.715138 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Mar 17 18:53:02.715192 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Mar 17 18:53:02.715244 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Mar 17 18:53:02.715291 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 17 18:53:02.715337 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Mar 17 18:53:02.715384 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Mar 17 18:53:02.715430 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 17 18:53:02.715477 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Mar 17 18:53:02.715523 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Mar 17 18:53:02.715568 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Mar 17 18:53:02.715615 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Mar 17 18:53:02.716068 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Mar 17 18:53:02.716134 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Mar 
17 18:53:02.716186 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Mar 17 18:53:02.716359 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Mar 17 18:53:02.716415 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Mar 17 18:53:02.716469 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Mar 17 18:53:02.716521 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Mar 17 18:53:02.716733 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Mar 17 18:53:02.716788 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Mar 17 18:53:02.716842 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Mar 17 18:53:02.716889 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Mar 17 18:53:02.717229 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Mar 17 18:53:02.717282 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Mar 17 18:53:02.717348 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Mar 17 18:53:02.717424 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Mar 17 18:53:02.717498 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Mar 17 18:53:02.717587 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Mar 17 18:53:02.717663 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Mar 17 18:53:02.717745 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 17 18:53:02.717825 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Mar 17 18:53:02.717901 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Mar 17 18:53:02.718007 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Mar 17 18:53:02.718088 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Mar 17 18:53:02.718169 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Mar 17 18:53:02.718222 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Mar 17 18:53:02.718271 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Mar 17 18:53:02.718318 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Mar 17 18:53:02.718364 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Mar 17 18:53:02.718415 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Mar 17 18:53:02.718462 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Mar 17 18:53:02.718508 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 17 18:53:02.718554 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Mar 17 18:53:02.718596 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Mar 17 18:53:02.718636 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Mar 17 18:53:02.718676 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Mar 17 18:53:02.718716 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Mar 17 18:53:02.718759 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Mar 17 18:53:02.718800 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Mar 17 18:53:02.718840 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Mar 17 18:53:02.718885 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Mar 17 18:53:02.718927 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Mar 17 18:53:02.718982 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 17 
18:53:02.719027 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Mar 17 18:53:02.719072 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Mar 17 18:53:02.719121 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Mar 17 18:53:02.719167 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Mar 17 18:53:02.719209 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Mar 17 18:53:02.719250 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Mar 17 18:53:02.719292 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Mar 17 18:53:02.719634 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Mar 17 18:53:02.719689 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Mar 17 18:53:02.719737 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Mar 17 18:53:02.720070 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Mar 17 18:53:02.720137 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Mar 17 18:53:02.720184 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Mar 17 18:53:02.720503 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Mar 17 18:53:02.720561 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Mar 17 18:53:02.720608 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Mar 17 18:53:02.720654 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Mar 17 18:53:02.720841 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Mar 17 18:53:02.720889 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Mar 17 18:53:02.720939 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Mar 17 18:53:02.721273 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 17 18:53:02.721343 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Mar 17 18:53:02.721391 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Mar 17 18:53:02.721439 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Mar 17 18:53:02.721484 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Mar 17 18:53:02.721531 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Mar 17 18:53:02.721575 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Mar 17 18:53:02.721621 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Mar 17 18:53:02.721666 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Mar 17 18:53:02.721724 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Mar 17 18:53:02.721772 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Mar 17 18:53:02.721816 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Mar 17 18:53:02.721859 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Mar 17 18:53:02.721907 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Mar 17 18:53:02.721953 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Mar 17 18:53:02.722012 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Mar 17 18:53:02.722062 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Mar 17 18:53:02.722113 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 17 18:53:02.722602 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Mar 17 18:53:02.722681 kernel: pci_bus 0000:0f: 
resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 17 18:53:02.723217 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Mar 17 18:53:02.723306 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Mar 17 18:53:02.723377 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Mar 17 18:53:02.723707 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Mar 17 18:53:02.723763 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Mar 17 18:53:02.723807 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 17 18:53:02.723877 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Mar 17 18:53:02.723927 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Mar 17 18:53:02.724050 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 17 18:53:02.724101 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Mar 17 18:53:02.724145 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Mar 17 18:53:02.724187 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Mar 17 18:53:02.724419 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Mar 17 18:53:02.724466 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Mar 17 18:53:02.724512 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Mar 17 18:53:02.724561 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Mar 17 18:53:02.724604 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 17 18:53:02.724650 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Mar 17 18:53:02.724694 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 17 18:53:02.724740 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Mar 17 18:53:02.724786 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Mar 17 18:53:02.724834 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Mar 17 18:53:02.724878 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Mar 17 18:53:02.724923 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Mar 17 18:53:02.724976 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Mar 17 18:53:02.725311 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Mar 17 18:53:02.725364 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Mar 17 18:53:02.725409 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Mar 17 18:53:02.725457 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Mar 17 18:53:02.725502 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Mar 17 18:53:02.725544 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Mar 17 18:53:02.725591 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Mar 17 18:53:02.725636 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Mar 17 18:53:02.725683 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Mar 17 18:53:02.725726 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 17 18:53:02.725781 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Mar 17 18:53:02.726195 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Mar 17 18:53:02.726256 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Mar 17 18:53:02.726309 kernel: pci_bus 0000:20: resource 2 [mem 
0xe6500000-0xe65fffff 64bit pref] Mar 17 18:53:02.726637 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Mar 17 18:53:02.726687 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Mar 17 18:53:02.726736 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Mar 17 18:53:02.726780 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 17 18:53:02.726833 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 17 18:53:02.726843 kernel: PCI: CLS 32 bytes, default 64 Mar 17 18:53:02.726852 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 17 18:53:02.726858 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Mar 17 18:53:02.726865 kernel: clocksource: Switched to clocksource tsc Mar 17 18:53:02.726871 kernel: Initialise system trusted keyrings Mar 17 18:53:02.726877 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 17 18:53:02.726883 kernel: Key type asymmetric registered Mar 17 18:53:02.726889 kernel: Asymmetric key parser 'x509' registered Mar 17 18:53:02.726895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 17 18:53:02.726902 kernel: io scheduler mq-deadline registered Mar 17 18:53:02.726909 kernel: io scheduler kyber registered Mar 17 18:53:02.726915 kernel: io scheduler bfq registered Mar 17 18:53:02.726972 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Mar 17 18:53:02.727023 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.727072 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Mar 17 18:53:02.727120 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.727169 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Mar 17 18:53:02.727217 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.727267 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Mar 17 18:53:02.727314 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.727363 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Mar 17 18:53:02.727410 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.727458 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Mar 17 18:53:02.727508 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.727570 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Mar 17 18:53:02.727645 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.727719 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Mar 17 18:53:02.727806 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.727885 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Mar 17 
18:53:02.727978 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.728060 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Mar 17 18:53:02.728159 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.728395 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Mar 17 18:53:02.728470 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.728524 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Mar 17 18:53:02.728859 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.728915 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Mar 17 18:53:02.729138 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.729196 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Mar 17 18:53:02.729246 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.729294 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Mar 17 18:53:02.729624 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.729676 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Mar 17 18:53:02.729726 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.729775 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Mar 17 18:53:02.729822 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.729869 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Mar 17 18:53:02.729918 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.730015 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Mar 17 18:53:02.730066 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.730113 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Mar 17 18:53:02.730161 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.730332 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Mar 17 18:53:02.730386 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.730434 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Mar 17 18:53:02.730762 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.730818 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Mar 17 
18:53:02.730867 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.730919 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Mar 17 18:53:02.730978 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.731036 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Mar 17 18:53:02.731084 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.731132 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Mar 17 18:53:02.731388 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.731454 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Mar 17 18:53:02.731508 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.731558 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Mar 17 18:53:02.731826 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.731888 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Mar 17 18:53:02.731944 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.732041 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Mar 17 18:53:02.733303 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.733361 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Mar 17 18:53:02.733411 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.733764 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Mar 17 18:53:02.733822 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 17 18:53:02.733832 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 18:53:02.733839 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 18:53:02.733847 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 18:53:02.733854 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Mar 17 18:53:02.733860 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 18:53:02.733866 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 18:53:02.734215 kernel: rtc_cmos 00:01: registered as rtc0 Mar 17 18:53:02.734269 kernel: rtc_cmos 00:01: setting system clock to 2025-03-17T18:53:02 UTC (1742237582) Mar 17 18:53:02.734312 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Mar 17 18:53:02.734321 kernel: intel_pstate: CPU model not supported Mar 17 18:53:02.734328 kernel: NET: Registered PF_INET6 protocol family Mar 17 18:53:02.734334 kernel: Segment Routing with IPv6 Mar 17 18:53:02.734340 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 18:53:02.734347 kernel: NET: Registered PF_PACKET protocol 
family Mar 17 18:53:02.734353 kernel: Key type dns_resolver registered Mar 17 18:53:02.734359 kernel: IPI shorthand broadcast: enabled Mar 17 18:53:02.734368 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 18:53:02.734374 kernel: sched_clock: Marking stable (854737788, 223801532)->(1143724271, -65184951) Mar 17 18:53:02.734511 kernel: registered taskstats version 1 Mar 17 18:53:02.734519 kernel: Loading compiled-in X.509 certificates Mar 17 18:53:02.734525 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220' Mar 17 18:53:02.734531 kernel: Key type .fscrypt registered Mar 17 18:53:02.734537 kernel: Key type fscrypt-provisioning registered Mar 17 18:53:02.734544 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 18:53:02.734552 kernel: ima: Allocated hash algorithm: sha1 Mar 17 18:53:02.734558 kernel: ima: No architecture policies found Mar 17 18:53:02.734565 kernel: clk: Disabling unused clocks Mar 17 18:53:02.734571 kernel: Freeing unused kernel image (initmem) memory: 47472K Mar 17 18:53:02.734577 kernel: Write protecting the kernel read-only data: 28672k Mar 17 18:53:02.734583 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Mar 17 18:53:02.734589 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K Mar 17 18:53:02.734596 kernel: Run /init as init process Mar 17 18:53:02.734602 kernel: with arguments: Mar 17 18:53:02.734609 kernel: /init Mar 17 18:53:02.734615 kernel: with environment: Mar 17 18:53:02.734621 kernel: HOME=/ Mar 17 18:53:02.734627 kernel: TERM=linux Mar 17 18:53:02.734633 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 18:53:02.734641 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:53:02.734648 systemd[1]: Detected virtualization vmware. Mar 17 18:53:02.734655 systemd[1]: Detected architecture x86-64. Mar 17 18:53:02.734663 systemd[1]: Running in initrd. Mar 17 18:53:02.734842 systemd[1]: No hostname configured, using default hostname. Mar 17 18:53:02.734851 systemd[1]: Hostname set to . Mar 17 18:53:02.734858 systemd[1]: Initializing machine ID from random generator. Mar 17 18:53:02.734865 systemd[1]: Queued start job for default target initrd.target. Mar 17 18:53:02.734871 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:53:02.734877 systemd[1]: Reached target cryptsetup.target. Mar 17 18:53:02.734884 systemd[1]: Reached target paths.target. Mar 17 18:53:02.734892 systemd[1]: Reached target slices.target. Mar 17 18:53:02.734902 systemd[1]: Reached target swap.target. Mar 17 18:53:02.734909 systemd[1]: Reached target timers.target. Mar 17 18:53:02.734917 systemd[1]: Listening on iscsid.socket. Mar 17 18:53:02.734925 systemd[1]: Listening on iscsiuio.socket. Mar 17 18:53:02.734932 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:53:02.734939 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:53:02.734947 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:53:02.734986 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:53:02.734996 systemd[1]: Listening on systemd-udevd-control.socket. 
Mar 17 18:53:02.735003 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:53:02.735009 systemd[1]: Reached target sockets.target. Mar 17 18:53:02.735016 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:53:02.735022 systemd[1]: Finished network-cleanup.service. Mar 17 18:53:02.735029 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 18:53:02.735039 systemd[1]: Starting systemd-journald.service... Mar 17 18:53:02.735046 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:53:02.735054 systemd[1]: Starting systemd-resolved.service... Mar 17 18:53:02.735060 systemd[1]: Starting systemd-vconsole-setup.service... Mar 17 18:53:02.735066 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:53:02.735072 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 18:53:02.735079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:53:02.735085 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 18:53:02.735092 kernel: audit: type=1130 audit(1742237582.667:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.735098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:53:02.735104 kernel: audit: type=1130 audit(1742237582.671:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.735112 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 18:53:02.735119 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 18:53:02.735125 kernel: audit: type=1130 audit(1742237582.691:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.735131 systemd[1]: Starting dracut-cmdline.service... Mar 17 18:53:02.735138 systemd[1]: Started systemd-resolved.service. Mar 17 18:53:02.735144 kernel: audit: type=1130 audit(1742237582.699:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.735150 systemd[1]: Reached target nss-lookup.target. Mar 17 18:53:02.735158 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 18:53:02.735165 kernel: Bridge firewalling registered Mar 17 18:53:02.735174 systemd-journald[217]: Journal started Mar 17 18:53:02.735208 systemd-journald[217]: Runtime Journal (/run/log/journal/23b00a50789641c9acf1a884e3da0aec) is 4.8M, max 38.8M, 34.0M free. Mar 17 18:53:02.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:53:02.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.668099 systemd-modules-load[218]: Inserted module 'overlay' Mar 17 18:53:02.740105 systemd[1]: Started systemd-journald.service. Mar 17 18:53:02.740118 kernel: audit: type=1130 audit(1742237582.734:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.696894 systemd-resolved[219]: Positive Trust Anchors: Mar 17 18:53:02.696900 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:53:02.696920 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:53:02.741472 kernel: SCSI subsystem initialized Mar 17 18:53:02.698736 systemd-resolved[219]: Defaulting to hostname 'linux'. Mar 17 18:53:02.725053 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 17 18:53:02.741860 dracut-cmdline[232]: dracut-dracut-053 Mar 17 18:53:02.741860 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Mar 17 18:53:02.741860 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 18:53:02.750231 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:53:02.750252 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:53:02.751433 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:53:02.754123 systemd-modules-load[218]: Inserted module 'dm_multipath' Mar 17 18:53:02.754548 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:53:02.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.755259 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:53:02.757977 kernel: audit: type=1130 audit(1742237582.752:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.761868 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:53:02.764541 kernel: audit: type=1130 audit(1742237582.760:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.772980 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:53:02.784969 kernel: iscsi: registered transport (tcp) Mar 17 18:53:02.799972 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:53:02.800002 kernel: QLogic iSCSI HBA Driver Mar 17 18:53:02.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.816258 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:53:02.816852 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:53:02.819976 kernel: audit: type=1130 audit(1742237582.814:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:02.852970 kernel: raid6: avx2x4 gen() 47846 MB/s Mar 17 18:53:02.869978 kernel: raid6: avx2x4 xor() 19720 MB/s Mar 17 18:53:02.886976 kernel: raid6: avx2x2 gen() 48842 MB/s Mar 17 18:53:02.903969 kernel: raid6: avx2x2 xor() 31558 MB/s Mar 17 18:53:02.920971 kernel: raid6: avx2x1 gen() 44775 MB/s Mar 17 18:53:02.937969 kernel: raid6: avx2x1 xor() 27829 MB/s Mar 17 18:53:02.954979 kernel: raid6: sse2x4 gen() 20595 MB/s Mar 17 18:53:02.971981 kernel: raid6: sse2x4 xor() 11093 MB/s Mar 17 18:53:02.988974 kernel: raid6: sse2x2 gen() 20528 MB/s Mar 17 18:53:03.005973 kernel: raid6: sse2x2 xor() 13331 MB/s Mar 17 18:53:03.022979 kernel: raid6: sse2x1 gen() 18206 MB/s Mar 17 18:53:03.040242 kernel: raid6: sse2x1 xor() 8798 MB/s Mar 17 18:53:03.040283 kernel: raid6: using algorithm avx2x2 gen() 48842 MB/s Mar 17 18:53:03.040291 kernel: raid6: .... xor() 31558 MB/s, rmw enabled Mar 17 18:53:03.041424 kernel: raid6: using avx2x2 recovery algorithm Mar 17 18:53:03.050987 kernel: xor: automatically using best checksumming function avx Mar 17 18:53:03.110979 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 18:53:03.115793 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:53:03.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:03.116462 systemd[1]: Starting systemd-udevd.service... Mar 17 18:53:03.119047 kernel: audit: type=1130 audit(1742237583.113:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:03.114000 audit: BPF prog-id=7 op=LOAD Mar 17 18:53:03.114000 audit: BPF prog-id=8 op=LOAD Mar 17 18:53:03.128068 systemd-udevd[415]: Using default interface naming scheme 'v252'. Mar 17 18:53:03.131529 systemd[1]: Started systemd-udevd.service. Mar 17 18:53:03.132182 systemd[1]: Starting dracut-pre-trigger.service... 
Mar 17 18:53:03.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:03.141758 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Mar 17 18:53:03.157861 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:53:03.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:03.158434 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:53:03.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:03.223998 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:53:03.284969 kernel: VMware PVSCSI driver - version 1.0.7.0-k Mar 17 18:53:03.288330 kernel: vmw_pvscsi: using 64bit dma Mar 17 18:53:03.288358 kernel: vmw_pvscsi: max_id: 16 Mar 17 18:53:03.288366 kernel: vmw_pvscsi: setting ring_pages to 8 Mar 17 18:53:03.302969 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:53:03.307804 kernel: vmw_pvscsi: enabling reqCallThreshold Mar 17 18:53:03.307826 kernel: vmw_pvscsi: driver-based request coalescing enabled Mar 17 18:53:03.307834 kernel: vmw_pvscsi: using MSI-X Mar 17 18:53:03.307842 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Mar 17 18:53:03.307944 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Mar 17 18:53:03.314366 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 18:53:03.315396 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Mar 17 18:53:03.317979 kernel: AES CTR mode by8 optimization enabled Mar 17 18:53:03.322112 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Mar 17 18:53:03.322131 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Mar 17 18:53:03.323594 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Mar 17 18:53:03.324985 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Mar 17 18:53:03.382718 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 18:53:03.382800 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Mar 17 18:53:03.382870 kernel: sd 0:0:0:0: [sda] Cache data unavailable Mar 17 18:53:03.382928 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Mar 17 18:53:03.383006 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Mar 17 18:53:03.383067 kernel: libata version 3.00 loaded. Mar 17 18:53:03.383075 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:53:03.383082 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 18:53:03.385968 kernel: ata_piix 0000:00:07.1: version 2.13 Mar 17 18:53:03.391580 kernel: scsi host1: ata_piix Mar 17 18:53:03.391649 kernel: scsi host2: ata_piix Mar 17 18:53:03.391709 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Mar 17 18:53:03.391718 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Mar 17 18:53:03.426716 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:53:03.454969 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (463) Mar 17 18:53:03.457969 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Mar 17 18:53:03.460216 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:53:03.460454 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:53:03.462569 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:53:03.463221 systemd[1]: Starting disk-uuid.service... Mar 17 18:53:03.497975 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:53:03.506970 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:53:03.556021 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Mar 17 18:53:03.562973 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Mar 17 18:53:03.586335 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Mar 17 18:53:03.602989 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 18:53:03.603007 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 17 18:53:04.511977 disk-uuid[547]: The operation has completed successfully. Mar 17 18:53:04.512219 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:53:04.553240 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:53:04.553297 systemd[1]: Finished disk-uuid.service. Mar 17 18:53:04.553883 systemd[1]: Starting verity-setup.service... Mar 17 18:53:04.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:04.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:04.580970 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 18:53:04.630808 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:53:04.631707 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:53:04.633272 systemd[1]: Finished verity-setup.service. Mar 17 18:53:04.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:04.712038 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:53:04.712263 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:53:04.712894 systemd[1]: Starting afterburn-network-kargs.service... Mar 17 18:53:04.713393 systemd[1]: Starting ignition-setup.service... Mar 17 18:53:04.802492 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:53:04.802542 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:53:04.802554 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:53:04.878975 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:53:04.903671 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:53:04.924898 systemd[1]: Finished ignition-setup.service. Mar 17 18:53:04.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:04.925654 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:53:05.037814 systemd[1]: Finished afterburn-network-kargs.service. 
Mar 17 18:53:05.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.038620 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:53:05.096270 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:53:05.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.095000 audit: BPF prog-id=9 op=LOAD Mar 17 18:53:05.097532 systemd[1]: Starting systemd-networkd.service... Mar 17 18:53:05.113684 systemd-networkd[735]: lo: Link UP Mar 17 18:53:05.113690 systemd-networkd[735]: lo: Gained carrier Mar 17 18:53:05.114055 systemd-networkd[735]: Enumeration completed Mar 17 18:53:05.116364 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Mar 17 18:53:05.116474 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Mar 17 18:53:05.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.114267 systemd-networkd[735]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Mar 17 18:53:05.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.114423 systemd[1]: Started systemd-networkd.service. Mar 17 18:53:05.114594 systemd[1]: Reached target network.target. Mar 17 18:53:05.115157 systemd[1]: Starting iscsiuio.service... Mar 17 18:53:05.117680 systemd-networkd[735]: ens192: Link UP Mar 17 18:53:05.117683 systemd-networkd[735]: ens192: Gained carrier Mar 17 18:53:05.120543 systemd[1]: Started iscsiuio.service. Mar 17 18:53:05.123155 systemd[1]: Starting iscsid.service... Mar 17 18:53:05.126306 iscsid[740]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:53:05.126306 iscsid[740]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:53:05.126306 iscsid[740]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:53:05.126306 iscsid[740]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:53:05.126306 iscsid[740]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:53:05.126306 iscsid[740]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:53:05.127547 systemd[1]: Started iscsid.service. Mar 17 18:53:05.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.128369 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:53:05.136619 systemd[1]: Finished dracut-initqueue.service. 
Mar 17 18:53:05.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.137003 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:53:05.137217 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:53:05.137434 systemd[1]: Reached target remote-fs.target. Mar 17 18:53:05.138161 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:53:05.144340 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:53:05.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.188282 ignition[606]: Ignition 2.14.0 Mar 17 18:53:05.188291 ignition[606]: Stage: fetch-offline Mar 17 18:53:05.188333 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:53:05.188352 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Mar 17 18:53:05.197700 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 17 18:53:05.197791 ignition[606]: parsed url from cmdline: "" Mar 17 18:53:05.197794 ignition[606]: no config URL provided Mar 17 18:53:05.197798 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:53:05.197803 ignition[606]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:53:05.198191 ignition[606]: config successfully fetched Mar 17 18:53:05.198210 ignition[606]: parsing config with SHA512: ccd9ecd26ae0c9ad8f212661d08750c4168988509a596979ced3ea772fb8879613d37eea5d73dd9eecef631426f627363de1342315012d0eaf30a770a744c073 Mar 17 18:53:05.201608 unknown[606]: fetched base config from "system" Mar 17 18:53:05.201615 unknown[606]: fetched user config from "vmware" Mar 17 18:53:05.202075 ignition[606]: fetch-offline: fetch-offline passed Mar 17 18:53:05.202159 ignition[606]: Ignition finished successfully Mar 17 18:53:05.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.202857 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:53:05.203021 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 18:53:05.203598 systemd[1]: Starting ignition-kargs.service... Mar 17 18:53:05.210529 ignition[754]: Ignition 2.14.0 Mar 17 18:53:05.210537 ignition[754]: Stage: kargs Mar 17 18:53:05.210608 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:53:05.210618 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Mar 17 18:53:05.212017 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 17 18:53:05.213527 ignition[754]: kargs: kargs passed Mar 17 18:53:05.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.214455 systemd[1]: Finished ignition-kargs.service. 
Mar 17 18:53:05.213559 ignition[754]: Ignition finished successfully Mar 17 18:53:05.215393 systemd[1]: Starting ignition-disks.service... Mar 17 18:53:05.219847 ignition[760]: Ignition 2.14.0 Mar 17 18:53:05.220132 ignition[760]: Stage: disks Mar 17 18:53:05.220323 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:53:05.220630 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Mar 17 18:53:05.222132 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 17 18:53:05.223742 ignition[760]: disks: disks passed Mar 17 18:53:05.223782 ignition[760]: Ignition finished successfully Mar 17 18:53:05.224440 systemd[1]: Finished ignition-disks.service. Mar 17 18:53:05.224596 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:53:05.224690 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:53:05.224776 systemd[1]: Reached target local-fs.target. Mar 17 18:53:05.224859 systemd[1]: Reached target sysinit.target. Mar 17 18:53:05.224941 systemd[1]: Reached target basic.target. Mar 17 18:53:05.225518 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:53:05.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.336830 systemd-fsck[768]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks Mar 17 18:53:05.338151 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:53:05.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.338885 systemd[1]: Mounting sysroot.mount... Mar 17 18:53:05.352147 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:53:05.352540 systemd[1]: Mounted sysroot.mount. Mar 17 18:53:05.352683 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:53:05.353687 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:53:05.354044 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:53:05.354068 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:53:05.354082 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:53:05.356033 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:53:05.356692 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:53:05.360054 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:53:05.363848 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:53:05.366654 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:53:05.369426 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:53:05.462994 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:53:05.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.463626 systemd[1]: Starting ignition-mount.service... Mar 17 18:53:05.464156 systemd[1]: Starting sysroot-boot.service... 
Mar 17 18:53:05.468502 bash[819]: umount: /sysroot/usr/share/oem: not mounted. Mar 17 18:53:05.474436 ignition[820]: INFO : Ignition 2.14.0 Mar 17 18:53:05.474691 ignition[820]: INFO : Stage: mount Mar 17 18:53:05.474905 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:53:05.475076 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Mar 17 18:53:05.476862 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 17 18:53:05.478818 ignition[820]: INFO : mount: mount passed Mar 17 18:53:05.480503 ignition[820]: INFO : Ignition finished successfully Mar 17 18:53:05.481132 systemd[1]: Finished ignition-mount.service. Mar 17 18:53:05.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.483909 systemd[1]: Finished sysroot-boot.service. Mar 17 18:53:05.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:05.672988 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:53:05.689989 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (830) Mar 17 18:53:05.694361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:53:05.694380 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:53:05.694389 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:53:05.723990 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:53:05.733092 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:53:05.733849 systemd[1]: Starting ignition-files.service... 
Mar 17 18:53:05.746733 ignition[850]: INFO : Ignition 2.14.0 Mar 17 18:53:05.746733 ignition[850]: INFO : Stage: files Mar 17 18:53:05.747189 ignition[850]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:53:05.747189 ignition[850]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Mar 17 18:53:05.748726 ignition[850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 17 18:53:05.751547 ignition[850]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:53:05.753068 ignition[850]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:53:05.753068 ignition[850]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:53:05.758046 ignition[850]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:53:05.758333 ignition[850]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:53:05.759189 unknown[850]: wrote ssh authorized keys file for user: core Mar 17 18:53:05.759714 ignition[850]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:53:05.760359 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:53:05.760359 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:53:05.760359 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 18:53:05.760359 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 18:53:05.807644 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:53:05.939552 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 18:53:05.939863 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:53:05.940227 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:53:05.940427 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:53:05.940714 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:53:05.940920 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:53:05.941187 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:53:05.941393 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:53:05.941654 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:53:05.942037 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Mar 17 18:53:05.942284 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:53:05.942481 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 18:53:05.942768 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 18:53:05.943242 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Mar 17 18:53:05.943465 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:53:05.948664 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3816131732" Mar 17 18:53:05.948907 ignition[850]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3816131732": device or resource busy Mar 17 18:53:05.949140 ignition[850]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3816131732", trying btrfs: device or resource busy Mar 17 18:53:05.949369 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3816131732" Mar 17 18:53:05.951204 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3816131732" Mar 17 18:53:05.951842 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3816131732" Mar 17 18:53:05.952741 systemd[1]: mnt-oem3816131732.mount: Deactivated successfully. 
Mar 17 18:53:05.953175 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3816131732" Mar 17 18:53:05.953382 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Mar 17 18:53:05.953603 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 18:53:05.953881 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 18:53:06.423300 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Mar 17 18:53:06.584123 systemd-networkd[735]: ens192: Gained IPv6LL Mar 17 18:53:06.602609 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 18:53:06.608322 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Mar 17 18:53:06.608524 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Mar 17 18:53:06.608524 ignition[850]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" Mar 17 18:53:06.608524 ignition[850]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" Mar 17 18:53:06.608524 ignition[850]: INFO : files: op(12): [started] processing unit "containerd.service" Mar 17 18:53:06.608524 ignition[850]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:53:06.608524 ignition[850]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:53:06.608524 ignition[850]: INFO : files: op(12): [finished] processing unit "containerd.service" Mar 17 18:53:06.608524 ignition[850]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Mar 17 18:53:06.608524 ignition[850]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(18): [started] setting preset to enabled for "vmtoolsd.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(18): 
[finished] setting preset to enabled for "vmtoolsd.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 18:53:06.609875 ignition[850]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:53:06.695289 ignition[850]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:53:06.695502 ignition[850]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 18:53:06.695502 ignition[850]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:53:06.695801 ignition[850]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:53:06.695801 ignition[850]: INFO : files: files passed Mar 17 18:53:06.695801 ignition[850]: INFO : Ignition finished successfully Mar 17 18:53:06.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.697400 systemd[1]: Finished ignition-files.service. Mar 17 18:53:06.698018 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:53:06.698144 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:53:06.698564 systemd[1]: Starting ignition-quench.service... Mar 17 18:53:06.701469 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:53:06.701672 systemd[1]: Finished ignition-quench.service. Mar 17 18:53:06.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.702812 initrd-setup-root-after-ignition[876]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:53:06.703326 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:53:06.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.703629 systemd[1]: Reached target ignition-complete.target. Mar 17 18:53:06.704354 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:53:06.713554 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:53:06.713807 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:53:06.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:53:06.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.714249 systemd[1]: Reached target initrd-fs.target. Mar 17 18:53:06.714478 systemd[1]: Reached target initrd.target. Mar 17 18:53:06.714732 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:53:06.715549 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:53:06.723589 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:53:06.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.724532 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:53:06.731779 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:53:06.732169 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:53:06.732502 systemd[1]: Stopped target timers.target. Mar 17 18:53:06.732776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:53:06.733024 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:53:06.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.733428 systemd[1]: Stopped target initrd.target. Mar 17 18:53:06.733708 systemd[1]: Stopped target basic.target. Mar 17 18:53:06.734037 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:53:06.734341 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:53:06.734626 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:53:06.734914 systemd[1]: Stopped target remote-fs.target. Mar 17 18:53:06.735188 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:53:06.735480 systemd[1]: Stopped target sysinit.target. Mar 17 18:53:06.735759 systemd[1]: Stopped target local-fs.target. Mar 17 18:53:06.736053 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:53:06.736324 systemd[1]: Stopped target swap.target. Mar 17 18:53:06.736571 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:53:06.736788 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:53:06.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.737152 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:53:06.737403 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:53:06.737621 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:53:06.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.737974 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:53:06.738203 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:53:06.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.738583 systemd[1]: Stopped target paths.target. 
Mar 17 18:53:06.738822 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:53:06.741991 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:53:06.742319 systemd[1]: Stopped target slices.target. Mar 17 18:53:06.742602 systemd[1]: Stopped target sockets.target. Mar 17 18:53:06.742865 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:53:06.743080 systemd[1]: Closed iscsid.socket. Mar 17 18:53:06.743357 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:53:06.743619 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:53:06.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.743986 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:53:06.744210 systemd[1]: Stopped ignition-files.service. Mar 17 18:53:06.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.745112 systemd[1]: Stopping ignition-mount.service... Mar 17 18:53:06.746004 systemd[1]: Stopping iscsiuio.service... Mar 17 18:53:06.750012 ignition[889]: INFO : Ignition 2.14.0 Mar 17 18:53:06.750537 ignition[889]: INFO : Stage: umount Mar 17 18:53:06.750734 ignition[889]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:53:06.750897 ignition[889]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Mar 17 18:53:06.752919 ignition[889]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 17 18:53:06.753463 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:53:06.753704 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:53:06.753944 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:53:06.754269 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:53:06.757717 kernel: kauditd_printk_skb: 37 callbacks suppressed Mar 17 18:53:06.757742 kernel: audit: type=1131 audit(1742237586.752:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.757884 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:53:06.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.759832 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:53:06.760073 systemd[1]: Stopped iscsiuio.service. Mar 17 18:53:06.761973 kernel: audit: type=1131 audit(1742237586.756:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.762241 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Mar 17 18:53:06.762424 systemd[1]: Closed iscsiuio.socket. Mar 17 18:53:06.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.765094 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:53:06.765187 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:53:06.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.768399 kernel: audit: type=1131 audit(1742237586.760:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.768424 kernel: audit: type=1130 audit(1742237586.763:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.769515 ignition[889]: INFO : umount: umount passed Mar 17 18:53:06.771049 kernel: audit: type=1131 audit(1742237586.763:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.771062 ignition[889]: INFO : Ignition finished successfully Mar 17 18:53:06.771374 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:53:06.771583 systemd[1]: Stopped ignition-mount.service. Mar 17 18:53:06.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.771872 systemd[1]: Stopped target network.target. Mar 17 18:53:06.774983 kernel: audit: type=1131 audit(1742237586.769:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.774903 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:53:06.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.774932 systemd[1]: Stopped ignition-disks.service. Mar 17 18:53:06.775083 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:53:06.775105 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:53:06.778074 kernel: audit: type=1131 audit(1742237586.773:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.778207 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 17 18:53:06.781108 kernel: audit: type=1131 audit(1742237586.776:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.778232 systemd[1]: Stopped ignition-setup.service. Mar 17 18:53:06.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.781547 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:53:06.784022 kernel: audit: type=1131 audit(1742237586.779:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.784042 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:53:06.786750 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:53:06.786813 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:53:06.787065 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:53:06.790110 kernel: audit: type=1131 audit(1742237586.785:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.787087 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:53:06.790727 systemd[1]: Stopping network-cleanup.service... Mar 17 18:53:06.791052 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:53:06.791225 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:53:06.791480 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Mar 17 18:53:06.791670 systemd[1]: Stopped afterburn-network-kargs.service. Mar 17 18:53:06.789000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:53:06.791971 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:53:06.792129 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:53:06.792507 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:53:06.792675 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:53:06.792939 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:53:06.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.795490 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 17 18:53:06.795546 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:53:06.795906 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:53:06.796044 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:53:06.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.797310 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:53:06.797385 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:53:06.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.798130 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:53:06.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.797000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:53:06.798157 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:53:06.798274 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:53:06.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.798296 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:53:06.798394 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:53:06.798418 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:53:06.798536 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:53:06.798556 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:53:06.798658 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:53:06.798677 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:53:06.799716 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:53:06.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.803411 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 18:53:06.803442 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 17 18:53:06.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:53:06.803654 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:53:06.803682 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:53:06.803814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:53:06.803835 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:53:06.804503 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 18:53:06.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.804826 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:53:06.804884 systemd[1]: Stopped network-cleanup.service. Mar 17 18:53:06.805272 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:53:06.805327 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:53:06.913564 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:53:06.913657 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:53:06.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.914050 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:53:06.914193 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:53:06.914226 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:53:06.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:06.914951 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:53:06.958433 systemd[1]: Switching root. Mar 17 18:53:06.957000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:53:06.957000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:53:06.957000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:53:06.957000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:53:06.958000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:53:06.978990 iscsid[740]: iscsid shutting down. Mar 17 18:53:06.979194 systemd-journald[217]: Journal stopped Mar 17 18:53:11.030530 systemd-journald[217]: Received SIGTERM from PID 1 (n/a). Mar 17 18:53:11.030548 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:53:11.030557 kernel: SELinux: Class anon_inode not defined in policy. 
Mar 17 18:53:11.030563 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:53:11.030569 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:53:11.030576 kernel: SELinux: policy capability open_perms=1 Mar 17 18:53:11.030582 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:53:11.030588 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:53:11.030594 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:53:11.030600 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:53:11.030605 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:53:11.030611 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:53:11.030619 systemd[1]: Successfully loaded SELinux policy in 42.139ms. Mar 17 18:53:11.030626 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.218ms. Mar 17 18:53:11.030634 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:53:11.030641 systemd[1]: Detected virtualization vmware. Mar 17 18:53:11.030648 systemd[1]: Detected architecture x86-64. Mar 17 18:53:11.030655 systemd[1]: Detected first boot. Mar 17 18:53:11.030662 systemd[1]: Initializing machine ID from random generator. Mar 17 18:53:11.030669 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:53:11.030675 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:53:11.030682 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:53:11.030689 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:53:11.030697 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:53:11.030705 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:53:11.030712 systemd[1]: Unnecessary job was removed for dev-sda6.device. Mar 17 18:53:11.030718 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:53:11.030725 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:53:11.030732 systemd[1]: Created slice system-getty.slice. Mar 17 18:53:11.030738 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:53:11.030745 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:53:11.030753 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:53:11.030759 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:53:11.030766 systemd[1]: Created slice user.slice. Mar 17 18:53:11.030772 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:53:11.030779 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:53:11.030785 systemd[1]: Set up automount boot.automount. Mar 17 18:53:11.030792 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:53:11.030798 systemd[1]: Reached target integritysetup.target. 
Mar 17 18:53:11.030805 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:53:11.030814 systemd[1]: Reached target remote-fs.target. Mar 17 18:53:11.030822 systemd[1]: Reached target slices.target. Mar 17 18:53:11.030828 systemd[1]: Reached target swap.target. Mar 17 18:53:11.030835 systemd[1]: Reached target torcx.target. Mar 17 18:53:11.030842 systemd[1]: Reached target veritysetup.target. Mar 17 18:53:11.030849 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:53:11.030855 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:53:11.030863 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:53:11.030870 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:53:11.030878 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:53:11.030884 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:53:11.030891 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:53:11.030898 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:53:11.030905 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:53:11.030913 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:53:11.030920 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:53:11.030928 systemd[1]: Mounting media.mount... Mar 17 18:53:11.030935 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:53:11.030942 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:53:11.030949 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:53:11.030963 systemd[1]: Mounting tmp.mount... Mar 17 18:53:11.030974 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:53:11.030981 systemd[1]: Starting ignition-delete-config.service... Mar 17 18:53:11.030988 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:53:11.030995 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:53:11.031002 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:53:11.031009 systemd[1]: Starting modprobe@drm.service... Mar 17 18:53:11.031016 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:53:11.031023 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:53:11.031030 systemd[1]: Starting modprobe@loop.service... Mar 17 18:53:11.031038 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:53:11.031045 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 18:53:11.031053 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Mar 17 18:53:11.031059 kernel: fuse: init (API version 7.34) Mar 17 18:53:11.031066 systemd[1]: Starting systemd-journald.service... Mar 17 18:53:11.031073 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:53:11.031079 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:53:11.031086 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:53:11.031094 kernel: loop: module loaded Mar 17 18:53:11.031102 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:53:11.031109 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:53:11.031116 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:53:11.031123 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:53:11.031130 systemd[1]: Mounted media.mount. Mar 17 18:53:11.031136 systemd[1]: Mounted sys-kernel-debug.mount. 
Mar 17 18:53:11.031143 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:53:11.031150 systemd[1]: Mounted tmp.mount. Mar 17 18:53:11.031158 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:53:11.031165 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:53:11.031172 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:53:11.031179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:53:11.031185 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:53:11.031192 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:53:11.031200 systemd[1]: Finished modprobe@drm.service. Mar 17 18:53:11.031206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:53:11.031213 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:53:11.031222 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:53:11.031229 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:53:11.031236 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:53:11.031243 systemd[1]: Finished modprobe@loop.service. Mar 17 18:53:11.031250 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:53:11.031257 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:53:11.031264 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:53:11.031271 systemd[1]: Reached target network-pre.target. Mar 17 18:53:11.031278 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:53:11.031286 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:53:11.031293 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:53:11.031302 systemd-journald[1038]: Journal started Mar 17 18:53:11.031329 systemd-journald[1038]: Runtime Journal (/run/log/journal/06c3d0f490be457fb199af3bcc87ff15) is 4.8M, max 38.8M, 34.0M free. Mar 17 18:53:11.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:53:11.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.031873 jq[1018]: true Mar 17 18:53:11.026000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:53:11.026000 audit[1038]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd2ae4da40 a2=4000 a3=7ffd2ae4dadc items=0 ppid=1 pid=1038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:53:11.026000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:53:11.032447 jq[1063]: true Mar 17 18:53:11.062904 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:53:11.062941 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:53:11.065840 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:53:11.065859 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:53:11.068091 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:53:11.068965 systemd[1]: Started systemd-journald.service. 
Mar 17 18:53:11.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.071905 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:53:11.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.072383 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:53:11.072525 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:53:11.073504 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:53:11.074408 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:53:11.089667 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:53:11.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.089868 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:53:11.092717 systemd-journald[1038]: Time spent on flushing to /var/log/journal/06c3d0f490be457fb199af3bcc87ff15 is 47.386ms for 1953 entries. Mar 17 18:53:11.092717 systemd-journald[1038]: System Journal (/var/log/journal/06c3d0f490be457fb199af3bcc87ff15) is 8.0M, max 584.8M, 576.8M free. Mar 17 18:53:11.162148 systemd-journald[1038]: Received client request to flush runtime journal. Mar 17 18:53:11.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.103877 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:53:11.114094 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:53:11.115174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:53:11.159185 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:53:11.160350 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:53:11.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.163138 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:53:11.166982 udevadm[1106]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 18:53:11.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.186309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Mar 17 18:53:11.355321 ignition[1089]: Ignition 2.14.0 Mar 17 18:53:11.355487 ignition[1089]: deleting config from guestinfo properties Mar 17 18:53:11.360498 ignition[1089]: Successfully deleted config Mar 17 18:53:11.361156 systemd[1]: Finished ignition-delete-config.service. Mar 17 18:53:11.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.641179 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:53:11.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.642245 systemd[1]: Starting systemd-udevd.service... Mar 17 18:53:11.655239 systemd-udevd[1111]: Using default interface naming scheme 'v252'. Mar 17 18:53:11.772097 kernel: kauditd_printk_skb: 66 callbacks suppressed Mar 17 18:53:11.772172 kernel: audit: type=1130 audit(1742237591.762:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.763905 systemd[1]: Started systemd-udevd.service. Mar 17 18:53:11.765361 systemd[1]: Starting systemd-networkd.service... Mar 17 18:53:11.778824 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:53:11.793026 systemd[1]: Found device dev-ttyS0.device. Mar 17 18:53:11.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.819973 kernel: audit: type=1130 audit(1742237591.815:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.816876 systemd[1]: Started systemd-userdbd.service. 
Mar 17 18:53:11.862000 audit[1117]: AVC avc: denied { confidentiality } for pid=1117 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:53:11.871532 kernel: audit: type=1400 audit(1742237591.862:117): avc: denied { confidentiality } for pid=1117 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:53:11.871583 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 18:53:11.875475 kernel: audit: type=1300 audit(1742237591.862:117): arch=c000003e syscall=175 success=yes exit=0 a0=55e6b86d5390 a1=338ac a2=7f9fcaa9dbc5 a3=5 items=110 ppid=1111 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:53:11.862000 audit[1117]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e6b86d5390 a1=338ac a2=7f9fcaa9dbc5 a3=5 items=110 ppid=1111 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:53:11.876312 kernel: audit: type=1307 audit(1742237591.862:117): cwd="/" Mar 17 18:53:11.862000 audit: CWD cwd="/" Mar 17 18:53:11.880939 kernel: audit: type=1302 audit(1742237591.862:117): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.880998 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:53:11.862000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=1 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.886313 kernel: audit: type=1302 audit(1742237591.862:117): item=1 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.886341 kernel: audit: type=1302 audit(1742237591.862:117): item=2 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=2 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.888908 kernel: audit: type=1302 audit(1742237591.862:117): item=3 name=(null) inode=24907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=3 name=(null) inode=24907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.893143 kernel: audit: type=1302 audit(1742237591.862:117): item=4 name=(null) inode=24906 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=4 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=5 name=(null) inode=24908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=6 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=7 name=(null) inode=24909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=8 name=(null) inode=24909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=9 name=(null) inode=24910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=10 name=(null) inode=24909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=11 name=(null) inode=24911 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=12 name=(null) inode=24909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=13 name=(null) inode=24912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=14 name=(null) inode=24909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=15 name=(null) inode=24913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=16 name=(null) inode=24909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=17 name=(null) inode=24914 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=18 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=19 name=(null) inode=24915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=20 name=(null) inode=24915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=21 name=(null) inode=24916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=22 name=(null) inode=24915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=23 name=(null) inode=24917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=24 name=(null) inode=24915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=25 name=(null) inode=24918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=26 name=(null) inode=24915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=27 name=(null) inode=24919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=28 name=(null) inode=24915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=29 name=(null) inode=24920 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=30 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=31 name=(null) inode=24921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=32 name=(null) inode=24921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=33 name=(null) inode=24922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=34 name=(null) inode=24921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=35 name=(null) inode=24923 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:53:11.862000 audit: PATH item=36 name=(null) inode=24921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=37 name=(null) inode=24924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=38 name=(null) inode=24921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=39 name=(null) inode=24925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=40 name=(null) inode=24921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=41 name=(null) inode=24926 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=42 name=(null) inode=24906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=43 name=(null) inode=24927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=44 name=(null) inode=24927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=45 name=(null) inode=24928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=46 name=(null) inode=24927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=47 name=(null) inode=24929 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=48 name=(null) inode=24927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=49 name=(null) inode=24930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=50 name=(null) inode=24927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=51 name=(null) inode=24931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=52 name=(null) inode=24927 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=53 name=(null) inode=24932 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=55 name=(null) inode=24933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=56 name=(null) inode=24933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=57 name=(null) inode=24934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=58 name=(null) inode=24933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=59 name=(null) inode=24935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=60 name=(null) inode=24933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=61 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=62 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=63 name=(null) inode=24937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=64 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=65 name=(null) inode=24938 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.895406 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Mar 17 18:53:11.895855 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Mar 17 18:53:11.895934 kernel: Guest personality initialized and is active Mar 17 18:53:11.862000 audit: PATH item=66 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=67 name=(null) inode=24939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=68 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=69 name=(null) inode=24940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=70 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=71 name=(null) inode=24941 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=72 name=(null) inode=24933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=73 name=(null) inode=24942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=74 name=(null) inode=24942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=75 name=(null) inode=24943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=76 name=(null) inode=24942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=77 name=(null) inode=24944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=78 name=(null) inode=24942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=79 name=(null) inode=24945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=80 name=(null) inode=24942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=81 name=(null) inode=24946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=82 name=(null) inode=24942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=83 name=(null) inode=24947 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=84 name=(null) inode=24933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=85 name=(null) inode=24948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=86 name=(null) inode=24948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=87 name=(null) inode=24949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=88 name=(null) inode=24948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=89 name=(null) inode=24950 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=90 name=(null) inode=24948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=91 name=(null) inode=24951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=92 name=(null) inode=24948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=93 name=(null) inode=24952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=94 name=(null) inode=24948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=95 name=(null) inode=24953 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=96 name=(null) inode=24933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=97 name=(null) inode=24954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=98 name=(null) inode=24954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=99 name=(null) inode=24955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH 
item=100 name=(null) inode=24954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=101 name=(null) inode=24956 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=102 name=(null) inode=24954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=103 name=(null) inode=24957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=104 name=(null) inode=24954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=105 name=(null) inode=24958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=106 name=(null) inode=24954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=107 name=(null) inode=24959 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PATH item=109 name=(null) inode=24960 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:53:11.862000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:53:11.897386 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 17 18:53:11.897406 kernel: Initialized host personality Mar 17 18:53:11.908971 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Mar 17 18:53:11.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:11.947695 systemd-networkd[1121]: lo: Link UP Mar 17 18:53:11.947701 systemd-networkd[1121]: lo: Gained carrier Mar 17 18:53:11.948005 systemd-networkd[1121]: Enumeration completed Mar 17 18:53:11.948087 systemd[1]: Started systemd-networkd.service. Mar 17 18:53:11.948606 systemd-networkd[1121]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Mar 17 18:53:11.951672 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Mar 17 18:53:11.951794 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Mar 17 18:53:11.951985 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Mar 17 18:53:11.953064 systemd-networkd[1121]: ens192: Link UP Mar 17 18:53:11.953225 systemd-networkd[1121]: ens192: Gained carrier Mar 17 18:53:11.965120 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:53:11.971966 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 18:53:11.985804 (udev-worker)[1126]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Mar 17 18:53:11.989969 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:53:12.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.007184 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:53:12.008199 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:53:12.050371 lvm[1145]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:53:12.072608 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:53:12.072812 systemd[1]: Reached target cryptsetup.target. Mar 17 18:53:12.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.073984 systemd[1]: Starting lvm2-activation.service... Mar 17 18:53:12.077310 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:53:12.095705 systemd[1]: Finished lvm2-activation.service. Mar 17 18:53:12.095910 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:53:12.096033 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:53:12.096055 systemd[1]: Reached target local-fs.target. Mar 17 18:53:12.096157 systemd[1]: Reached target machines.target. Mar 17 18:53:12.097281 systemd[1]: Starting ldconfig.service... Mar 17 18:53:12.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.098115 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.098156 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:53:12.099106 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:53:12.099851 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:53:12.101278 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:53:12.102831 systemd[1]: Starting systemd-sysext.service... Mar 17 18:53:12.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:53:12.109178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:53:12.114897 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1150 (bootctl) Mar 17 18:53:12.116126 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:53:12.119888 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:53:12.122191 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:53:12.122354 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:53:12.137973 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 18:53:12.162716 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:53:12.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.163543 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:53:12.193977 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:53:12.211981 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 18:53:12.476837 (sd-sysext)[1167]: Using extensions 'kubernetes'. Mar 17 18:53:12.477146 (sd-sysext)[1167]: Merged extensions into '/usr'. Mar 17 18:53:12.492256 systemd-fsck[1164]: fsck.fat 4.2 (2021-01-31) Mar 17 18:53:12.492256 systemd-fsck[1164]: /dev/sda1: 789 files, 119299/258078 clusters Mar 17 18:53:12.492558 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:53:12.493807 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:53:12.495606 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:53:12.496562 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:53:12.497582 systemd[1]: Starting modprobe@loop.service... Mar 17 18:53:12.498927 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.499076 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:53:12.499158 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:53:12.500596 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:53:12.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.505263 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:53:12.506285 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:53:12.506392 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:53:12.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.508387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 17 18:53:12.508486 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:53:12.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.511254 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:53:12.511345 systemd[1]: Finished modprobe@loop.service. Mar 17 18:53:12.512701 systemd[1]: Mounting boot.mount... Mar 17 18:53:12.512813 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:53:12.512853 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.513065 systemd[1]: Finished systemd-sysext.service. Mar 17 18:53:12.514508 systemd[1]: Starting ensure-sysext.service... Mar 17 18:53:12.515417 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:53:12.519295 systemd[1]: Reloading. Mar 17 18:53:12.550430 systemd-tmpfiles[1185]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:53:12.557985 /usr/lib/systemd/system-generators/torcx-generator[1204]: time="2025-03-17T18:53:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:53:12.558002 /usr/lib/systemd/system-generators/torcx-generator[1204]: time="2025-03-17T18:53:12Z" level=info msg="torcx already run" Mar 17 18:53:12.561217 systemd-tmpfiles[1185]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:53:12.569465 systemd-tmpfiles[1185]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:53:12.627518 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:53:12.627530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:53:12.640421 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:53:12.673689 systemd[1]: Mounted boot.mount. Mar 17 18:53:12.680105 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:53:12.680971 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:53:12.681723 systemd[1]: Starting modprobe@loop.service... Mar 17 18:53:12.681897 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.682104 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:53:12.682536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:53:12.682615 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:53:12.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.683119 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:53:12.683192 systemd[1]: Finished modprobe@loop.service. Mar 17 18:53:12.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.683563 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.684945 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:53:12.685874 systemd[1]: Starting modprobe@loop.service... Mar 17 18:53:12.686160 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.686238 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:53:12.686574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:53:12.686651 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:53:12.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.688671 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:53:12.688750 systemd[1]: Finished modprobe@loop.service. Mar 17 18:53:12.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:53:12.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.690222 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.693799 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:53:12.694707 systemd[1]: Starting modprobe@drm.service... Mar 17 18:53:12.696137 systemd[1]: Starting modprobe@loop.service... Mar 17 18:53:12.696344 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.696425 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:53:12.697345 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:53:12.697942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:53:12.698156 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:53:12.698488 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:53:12.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.698565 systemd[1]: Finished modprobe@drm.service. Mar 17 18:53:12.698869 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:53:12.698942 systemd[1]: Finished modprobe@loop.service. Mar 17 18:53:12.699326 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:53:12.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.700316 systemd[1]: Finished ensure-sysext.service. Mar 17 18:53:12.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.700531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:53:12.700625 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:53:12.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:12.700771 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:53:12.753987 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:53:12.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:13.036709 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:53:13.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:13.038010 systemd[1]: Starting audit-rules.service... Mar 17 18:53:13.039118 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:53:13.040419 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:53:13.042758 systemd[1]: Starting systemd-resolved.service... Mar 17 18:53:13.045624 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:53:13.048317 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:53:13.050429 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:53:13.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:13.050848 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:53:13.057000 audit[1300]: SYSTEM_BOOT pid=1300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:53:13.060691 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:53:13.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:13.109385 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:53:13.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:13.109580 systemd[1]: Reached target time-set.target. Mar 17 18:53:13.121663 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:53:13.121677 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:53:13.122213 systemd-resolved[1298]: Positive Trust Anchors: Mar 17 18:53:13.122221 systemd-resolved[1298]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:53:13.122241 systemd-resolved[1298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:53:13.129928 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:53:13.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:53:13.132193 augenrules[1317]: No rules Mar 17 18:53:13.130000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:53:13.130000 audit[1317]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa83942b0 a2=420 a3=0 items=0 ppid=1294 pid=1317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:53:13.130000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:53:13.132841 systemd[1]: Finished audit-rules.service. Mar 17 18:53:13.146884 systemd-resolved[1298]: Defaulting to hostname 'linux'. Mar 17 18:53:13.148842 systemd[1]: Started systemd-resolved.service. Mar 17 18:53:13.148999 systemd[1]: Reached target network.target. Mar 17 18:53:13.149090 systemd[1]: Reached target nss-lookup.target. Mar 17 18:54:29.641554 systemd-resolved[1298]: Clock change detected. Flushing caches. Mar 17 18:54:29.641749 systemd-timesyncd[1299]: Contacted time server 45.33.83.31:123 (0.flatcar.pool.ntp.org). Mar 17 18:54:29.642003 systemd-timesyncd[1299]: Initial clock synchronization to Mon 2025-03-17 18:54:29.641510 UTC. Mar 17 18:54:29.646827 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:54:29.661120 systemd[1]: Finished ldconfig.service. Mar 17 18:54:29.662377 systemd[1]: Starting systemd-update-done.service... Mar 17 18:54:29.666632 systemd[1]: Finished systemd-update-done.service. Mar 17 18:54:29.666808 systemd[1]: Reached target sysinit.target. Mar 17 18:54:29.666946 systemd[1]: Started motdgen.path. Mar 17 18:54:29.667045 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:54:29.667228 systemd[1]: Started logrotate.timer. Mar 17 18:54:29.667367 systemd[1]: Started mdadm.timer. Mar 17 18:54:29.667456 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:54:29.667551 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:54:29.667568 systemd[1]: Reached target paths.target. Mar 17 18:54:29.667649 systemd[1]: Reached target timers.target. Mar 17 18:54:29.667909 systemd[1]: Listening on dbus.socket. Mar 17 18:54:29.668929 systemd[1]: Starting docker.socket... Mar 17 18:54:29.670186 systemd[1]: Listening on sshd.socket. 
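(Annotation: the positive trust anchor logged by systemd-resolved above is the IANA root-zone KSK-2017 DS record — key tag 20326, algorithm 8 (RSASHA256), SHA-256 digest. If DNSSEC validation against it ever needs to be checked, something along these lines works; dig/delv are assumed to be available and are not part of this host's log:
    resolvectl status              # per-link DNS servers and DNSSEC state
    dig . DNSKEY +dnssec +multi    # fetch the root DNSKEY RRset
    delv . DNSKEY +rtrace          # validate it against the built-in root trust anchor
)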
Mar 17 18:54:29.670341 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:54:29.670627 systemd[1]: Listening on docker.socket. Mar 17 18:54:29.670827 systemd[1]: Reached target sockets.target. Mar 17 18:54:29.670918 systemd[1]: Reached target basic.target. Mar 17 18:54:29.671109 systemd[1]: System is tainted: cgroupsv1 Mar 17 18:54:29.671145 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:54:29.671170 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:54:29.672048 systemd[1]: Starting containerd.service... Mar 17 18:54:29.673014 systemd[1]: Starting dbus.service... Mar 17 18:54:29.673931 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:54:29.674820 systemd[1]: Starting extend-filesystems.service... Mar 17 18:54:29.674950 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:54:29.675816 systemd[1]: Starting motdgen.service... Mar 17 18:54:29.676821 systemd[1]: Starting prepare-helm.service... Mar 17 18:54:29.677828 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:54:29.678732 systemd[1]: Starting sshd-keygen.service... Mar 17 18:54:29.680391 systemd[1]: Starting systemd-logind.service... Mar 17 18:54:29.680655 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:54:29.680728 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:54:29.681482 systemd[1]: Starting update-engine.service... Mar 17 18:54:29.683254 jq[1331]: false Mar 17 18:54:29.684559 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:54:29.687358 systemd[1]: Starting vmtoolsd.service... Mar 17 18:54:29.693724 jq[1344]: true Mar 17 18:54:29.690968 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:54:29.691132 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:54:29.695106 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:54:29.695231 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:54:29.703830 jq[1351]: true Mar 17 18:54:29.719239 systemd[1]: Started vmtoolsd.service. Mar 17 18:54:29.720939 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:54:29.721081 systemd[1]: Finished motdgen.service. 
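(Annotation: "System is tainted: cgroupsv1" above means this image is booted on the legacy cgroup v1 hierarchy rather than the unified v2 one, which is consistent with the SystemdCgroup:false runc option in the containerd configuration further down. A generic check, not taken from this log, to confirm which hierarchy a host is on:
    stat -fc %T /sys/fs/cgroup     # prints "tmpfs" on cgroup v1 hosts, "cgroup2fs" on cgroup v2 hosts
)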
Mar 17 18:54:29.736698 extend-filesystems[1333]: Found loop1 Mar 17 18:54:29.737355 extend-filesystems[1333]: Found sda Mar 17 18:54:29.737517 extend-filesystems[1333]: Found sda1 Mar 17 18:54:29.737676 extend-filesystems[1333]: Found sda2 Mar 17 18:54:29.737816 extend-filesystems[1333]: Found sda3 Mar 17 18:54:29.739367 extend-filesystems[1333]: Found usr Mar 17 18:54:29.739521 extend-filesystems[1333]: Found sda4 Mar 17 18:54:29.739811 extend-filesystems[1333]: Found sda6 Mar 17 18:54:29.739811 extend-filesystems[1333]: Found sda7 Mar 17 18:54:29.739811 extend-filesystems[1333]: Found sda9 Mar 17 18:54:29.739811 extend-filesystems[1333]: Checking size of /dev/sda9 Mar 17 18:54:29.752918 env[1374]: time="2025-03-17T18:54:29.752886223Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:54:29.772919 env[1374]: time="2025-03-17T18:54:29.772895638Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:54:29.773074 env[1374]: time="2025-03-17T18:54:29.773062982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:54:29.773934 env[1374]: time="2025-03-17T18:54:29.773918310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:54:29.773990 env[1374]: time="2025-03-17T18:54:29.773980059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:54:29.774171 env[1374]: time="2025-03-17T18:54:29.774158961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:54:29.774225 env[1374]: time="2025-03-17T18:54:29.774215269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:54:29.774272 env[1374]: time="2025-03-17T18:54:29.774261656Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:54:29.774315 env[1374]: time="2025-03-17T18:54:29.774305405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:54:29.774406 env[1374]: time="2025-03-17T18:54:29.774396501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:54:29.774766 env[1374]: time="2025-03-17T18:54:29.774755141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:54:29.774900 env[1374]: time="2025-03-17T18:54:29.774888087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:54:29.774952 env[1374]: time="2025-03-17T18:54:29.774941962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 17 18:54:29.775166 env[1374]: time="2025-03-17T18:54:29.775155543Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:54:29.775218 tar[1348]: linux-amd64/helm Mar 17 18:54:29.775353 env[1374]: time="2025-03-17T18:54:29.775203939Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:54:29.781391 extend-filesystems[1333]: Old size kept for /dev/sda9 Mar 17 18:54:29.781391 extend-filesystems[1333]: Found sr0 Mar 17 18:54:29.780882 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:54:29.781010 systemd[1]: Finished extend-filesystems.service. Mar 17 18:54:29.784585 systemd-logind[1338]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 18:54:29.784600 systemd-logind[1338]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:54:29.788358 systemd-logind[1338]: New seat seat0. Mar 17 18:54:29.789785 bash[1375]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:54:29.790743 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:54:29.799210 env[1374]: time="2025-03-17T18:54:29.799184266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:54:29.799298 env[1374]: time="2025-03-17T18:54:29.799288236Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:54:29.799359 env[1374]: time="2025-03-17T18:54:29.799349481Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:54:29.799434 env[1374]: time="2025-03-17T18:54:29.799423733Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.799486 env[1374]: time="2025-03-17T18:54:29.799477009Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.799538 env[1374]: time="2025-03-17T18:54:29.799528541Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.799592 env[1374]: time="2025-03-17T18:54:29.799577306Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.799650 env[1374]: time="2025-03-17T18:54:29.799634406Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.799717 env[1374]: time="2025-03-17T18:54:29.799707449Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.799775 env[1374]: time="2025-03-17T18:54:29.799760227Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.799824 env[1374]: time="2025-03-17T18:54:29.799815490Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.799872 env[1374]: time="2025-03-17T18:54:29.799862796Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:54:29.800025 env[1374]: time="2025-03-17T18:54:29.800016049Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 18:54:29.802471 env[1374]: time="2025-03-17T18:54:29.802458301Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:54:29.802896 env[1374]: time="2025-03-17T18:54:29.802884129Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:54:29.802963 env[1374]: time="2025-03-17T18:54:29.802952299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803022 env[1374]: time="2025-03-17T18:54:29.803011766Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:54:29.803103 env[1374]: time="2025-03-17T18:54:29.803084834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803165 env[1374]: time="2025-03-17T18:54:29.803155134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803228 env[1374]: time="2025-03-17T18:54:29.803205453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803278 env[1374]: time="2025-03-17T18:54:29.803268761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803324 env[1374]: time="2025-03-17T18:54:29.803314321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803385 env[1374]: time="2025-03-17T18:54:29.803372534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803469 env[1374]: time="2025-03-17T18:54:29.803458129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803515 env[1374]: time="2025-03-17T18:54:29.803505554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803575 env[1374]: time="2025-03-17T18:54:29.803566085Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:54:29.803704 env[1374]: time="2025-03-17T18:54:29.803694494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803758 env[1374]: time="2025-03-17T18:54:29.803746286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803830 env[1374]: time="2025-03-17T18:54:29.803820075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.803881 env[1374]: time="2025-03-17T18:54:29.803871641Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:54:29.803942 env[1374]: time="2025-03-17T18:54:29.803925552Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:54:29.803989 env[1374]: time="2025-03-17T18:54:29.803980438Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 17 18:54:29.804040 env[1374]: time="2025-03-17T18:54:29.804029949Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:54:29.804114 env[1374]: time="2025-03-17T18:54:29.804104933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:54:29.804308 env[1374]: time="2025-03-17T18:54:29.804278035Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:54:29.806226 env[1374]: time="2025-03-17T18:54:29.804440250Z" level=info msg="Connect containerd service" Mar 17 18:54:29.806226 env[1374]: time="2025-03-17T18:54:29.804477972Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:54:29.806226 env[1374]: time="2025-03-17T18:54:29.804883541Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:54:29.806226 env[1374]: time="2025-03-17T18:54:29.805118160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:54:29.806226 env[1374]: time="2025-03-17T18:54:29.805164692Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 17 18:54:29.806226 env[1374]: time="2025-03-17T18:54:29.805528031Z" level=info msg="Start subscribing containerd event" Mar 17 18:54:29.806226 env[1374]: time="2025-03-17T18:54:29.805552961Z" level=info msg="Start recovering state" Mar 17 18:54:29.805249 systemd[1]: Started containerd.service. Mar 17 18:54:29.806795 env[1374]: time="2025-03-17T18:54:29.805594681Z" level=info msg="Start event monitor" Mar 17 18:54:29.806852 env[1374]: time="2025-03-17T18:54:29.806841831Z" level=info msg="Start snapshots syncer" Mar 17 18:54:29.806901 env[1374]: time="2025-03-17T18:54:29.806891006Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:54:29.806983 env[1374]: time="2025-03-17T18:54:29.806974352Z" level=info msg="Start streaming server" Mar 17 18:54:29.814678 kernel: NET: Registered PF_VSOCK protocol family Mar 17 18:54:29.815524 update_engine[1339]: I0317 18:54:29.814980 1339 main.cc:92] Flatcar Update Engine starting Mar 17 18:54:29.819372 dbus-daemon[1330]: [system] SELinux support is enabled Mar 17 18:54:29.819489 systemd[1]: Started dbus.service. Mar 17 18:54:29.820846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:54:29.820860 systemd[1]: Reached target system-config.target. Mar 17 18:54:29.820986 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:54:29.820996 systemd[1]: Reached target user-config.target. Mar 17 18:54:29.821687 env[1374]: time="2025-03-17T18:54:29.821668972Z" level=info msg="containerd successfully booted in 0.069291s" Mar 17 18:54:29.827447 systemd[1]: Started systemd-logind.service. Mar 17 18:54:29.827475 dbus-daemon[1330]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 18:54:29.829014 systemd[1]: Started update-engine.service. Mar 17 18:54:29.830550 systemd[1]: Started locksmithd.service. Mar 17 18:54:29.832116 update_engine[1339]: I0317 18:54:29.832046 1339 update_check_scheduler.cc:74] Next update check in 5m31s Mar 17 18:54:30.058751 systemd-networkd[1121]: ens192: Gained IPv6LL Mar 17 18:54:30.060066 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:54:30.060375 systemd[1]: Reached target network-online.target. Mar 17 18:54:30.061891 systemd[1]: Starting kubelet.service... Mar 17 18:54:30.154707 tar[1348]: linux-amd64/LICENSE Mar 17 18:54:30.154796 tar[1348]: linux-amd64/README.md Mar 17 18:54:30.157657 systemd[1]: Finished prepare-helm.service. Mar 17 18:54:30.332506 locksmithd[1405]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:54:31.266596 sshd_keygen[1356]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:54:31.281695 systemd[1]: Finished sshd-keygen.service. Mar 17 18:54:31.282963 systemd[1]: Starting issuegen.service... Mar 17 18:54:31.286910 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:54:31.287037 systemd[1]: Finished issuegen.service. Mar 17 18:54:31.288242 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:54:31.292480 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:54:31.293480 systemd[1]: Started getty@tty1.service. Mar 17 18:54:31.294383 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:54:31.294587 systemd[1]: Reached target getty.target. Mar 17 18:54:31.394284 systemd[1]: Started kubelet.service. 
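(Annotation: the long "Start cri plugin with config {...}" dump above is containerd 1.6's effective CRI configuration. Expressed as the corresponding /etc/containerd/config.toml stanzas it would look roughly like this — a sketch reconstructed only from the logged values; the actual file on the host may carry more settings:
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      max_container_log_line_size = 16384
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
The "failed to load cni during init" error in the same dump is expected at this stage, since /etc/cni/net.d is still empty until a CNI plugin writes a network config there.)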
Mar 17 18:54:31.394641 systemd[1]: Reached target multi-user.target. Mar 17 18:54:31.395778 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:54:31.401557 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:54:31.401811 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:54:31.402062 systemd[1]: Startup finished in 6.139s (kernel) + 7.239s (userspace) = 13.378s. Mar 17 18:54:31.426175 login[1477]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:54:31.427950 login[1479]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:54:31.434865 systemd[1]: Created slice user-500.slice. Mar 17 18:54:31.435497 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:54:31.436700 systemd-logind[1338]: New session 1 of user core. Mar 17 18:54:31.438899 systemd-logind[1338]: New session 2 of user core. Mar 17 18:54:31.449877 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:54:31.450853 systemd[1]: Starting user@500.service... Mar 17 18:54:31.460703 (systemd)[1491]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:31.524850 systemd[1491]: Queued start job for default target default.target. Mar 17 18:54:31.525222 systemd[1491]: Reached target paths.target. Mar 17 18:54:31.525292 systemd[1491]: Reached target sockets.target. Mar 17 18:54:31.525356 systemd[1491]: Reached target timers.target. Mar 17 18:54:31.525414 systemd[1491]: Reached target basic.target. Mar 17 18:54:31.525532 systemd[1]: Started user@500.service. Mar 17 18:54:31.526168 systemd[1]: Started session-1.scope. Mar 17 18:54:31.526562 systemd[1]: Started session-2.scope. Mar 17 18:54:31.526864 systemd[1491]: Reached target default.target. Mar 17 18:54:31.527016 systemd[1491]: Startup finished in 62ms. Mar 17 18:54:32.330765 kubelet[1485]: E0317 18:54:32.330723 1485 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:54:32.332118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:54:32.332204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:54:42.582951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:54:42.583097 systemd[1]: Stopped kubelet.service. Mar 17 18:54:42.584431 systemd[1]: Starting kubelet.service... Mar 17 18:54:42.634941 systemd[1]: Started kubelet.service. Mar 17 18:54:42.715971 kubelet[1527]: E0317 18:54:42.715944 1527 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:54:42.718762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:54:42.718868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:54:52.745843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:54:52.745970 systemd[1]: Stopped kubelet.service. Mar 17 18:54:52.747083 systemd[1]: Starting kubelet.service... 
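(Annotation: the kubelet crash-loop above and below is caused solely by the missing /var/lib/kubelet/config.yaml; on a kubeadm-provisioned node that file is written by 'kubeadm init' or 'kubeadm join', so these failures are expected until the node is bootstrapped, and the unit keeps retrying on its own as the scheduled-restart entries show. For reference, a minimal hand-written file would look roughly like this — a sketch whose values are assumptions, not taken from this host:
    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # consistent with SystemdCgroup = false in containerd
    EOF
)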
Mar 17 18:54:53.065242 systemd[1]: Started kubelet.service. Mar 17 18:54:53.103564 kubelet[1542]: E0317 18:54:53.103541 1542 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:54:53.104714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:54:53.104800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:55:03.245954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 18:55:03.246112 systemd[1]: Stopped kubelet.service. Mar 17 18:55:03.247499 systemd[1]: Starting kubelet.service... Mar 17 18:55:03.418597 systemd[1]: Started kubelet.service. Mar 17 18:55:03.448820 kubelet[1557]: E0317 18:55:03.448798 1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:55:03.449890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:55:03.449972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:55:09.870068 systemd[1]: Created slice system-sshd.slice. Mar 17 18:55:09.871381 systemd[1]: Started sshd@0-139.178.70.99:22-139.178.68.195:43670.service. Mar 17 18:55:09.916182 sshd[1564]: Accepted publickey for core from 139.178.68.195 port 43670 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:55:09.916940 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:09.919785 systemd[1]: Started session-3.scope. Mar 17 18:55:09.920007 systemd-logind[1338]: New session 3 of user core. Mar 17 18:55:09.966609 systemd[1]: Started sshd@1-139.178.70.99:22-139.178.68.195:43680.service. Mar 17 18:55:10.006216 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 43680 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:55:10.007037 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:10.009940 systemd[1]: Started session-4.scope. Mar 17 18:55:10.010172 systemd-logind[1338]: New session 4 of user core. Mar 17 18:55:10.059166 sshd[1569]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:10.060602 systemd[1]: Started sshd@2-139.178.70.99:22-139.178.68.195:43688.service. Mar 17 18:55:10.062222 systemd-logind[1338]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:55:10.062265 systemd[1]: sshd@1-139.178.70.99:22-139.178.68.195:43680.service: Deactivated successfully. Mar 17 18:55:10.062669 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:55:10.062920 systemd-logind[1338]: Removed session 4. Mar 17 18:55:10.096377 sshd[1574]: Accepted publickey for core from 139.178.68.195 port 43688 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:55:10.097261 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:10.099690 systemd-logind[1338]: New session 5 of user core. Mar 17 18:55:10.099971 systemd[1]: Started session-5.scope. 
Mar 17 18:55:10.147645 sshd[1574]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:10.148163 systemd[1]: Started sshd@3-139.178.70.99:22-139.178.68.195:43700.service. Mar 17 18:55:10.149724 systemd[1]: sshd@2-139.178.70.99:22-139.178.68.195:43688.service: Deactivated successfully. Mar 17 18:55:10.150421 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:55:10.150701 systemd-logind[1338]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:55:10.151357 systemd-logind[1338]: Removed session 5. Mar 17 18:55:10.182803 sshd[1581]: Accepted publickey for core from 139.178.68.195 port 43700 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:55:10.183586 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:10.186250 systemd[1]: Started session-6.scope. Mar 17 18:55:10.186510 systemd-logind[1338]: New session 6 of user core. Mar 17 18:55:10.236312 sshd[1581]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:10.237920 systemd[1]: Started sshd@4-139.178.70.99:22-139.178.68.195:43702.service. Mar 17 18:55:10.241161 systemd[1]: sshd@3-139.178.70.99:22-139.178.68.195:43700.service: Deactivated successfully. Mar 17 18:55:10.241507 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:55:10.242187 systemd-logind[1338]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:55:10.242736 systemd-logind[1338]: Removed session 6. Mar 17 18:55:10.272946 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 43702 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:55:10.273600 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:10.276285 systemd[1]: Started session-7.scope. Mar 17 18:55:10.276453 systemd-logind[1338]: New session 7 of user core. Mar 17 18:55:10.333868 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 18:55:10.333996 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:55:10.339774 dbus-daemon[1330]: н\xb5\u001f\xabU: received setenforce notice (enforcing=1663772096) Mar 17 18:55:10.340696 sudo[1594]: pam_unix(sudo:session): session closed for user root Mar 17 18:55:10.342693 sshd[1588]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:10.343734 systemd[1]: Started sshd@5-139.178.70.99:22-139.178.68.195:43706.service. Mar 17 18:55:10.347612 systemd[1]: sshd@4-139.178.70.99:22-139.178.68.195:43702.service: Deactivated successfully. Mar 17 18:55:10.348119 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:55:10.349143 systemd-logind[1338]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:55:10.350101 systemd-logind[1338]: Removed session 7. Mar 17 18:55:10.378451 sshd[1596]: Accepted publickey for core from 139.178.68.195 port 43706 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:55:10.379252 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:10.381821 systemd[1]: Started session-8.scope. Mar 17 18:55:10.382064 systemd-logind[1338]: New session 8 of user core. 
Mar 17 18:55:10.431507 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 18:55:10.431631 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:55:10.433533 sudo[1603]: pam_unix(sudo:session): session closed for user root Mar 17 18:55:10.436243 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 17 18:55:10.436502 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:55:10.441936 systemd[1]: Stopping audit-rules.service... Mar 17 18:55:10.442000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Mar 17 18:55:10.443828 kernel: kauditd_printk_skb: 147 callbacks suppressed Mar 17 18:55:10.443858 kernel: audit: type=1305 audit(1742237710.442:157): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Mar 17 18:55:10.442000 audit[1606]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffee7c51aa0 a2=420 a3=0 items=0 ppid=1 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.448897 kernel: audit: type=1300 audit(1742237710.442:157): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffee7c51aa0 a2=420 a3=0 items=0 ppid=1 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.448925 kernel: audit: type=1327 audit(1742237710.442:157): proctitle=2F7362696E2F617564697463746C002D44 Mar 17 18:55:10.442000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Mar 17 18:55:10.450007 auditctl[1606]: No rules Mar 17 18:55:10.450161 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:55:10.450276 systemd[1]: Stopped audit-rules.service. Mar 17 18:55:10.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.451328 systemd[1]: Starting audit-rules.service... Mar 17 18:55:10.454680 kernel: audit: type=1131 audit(1742237710.449:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.462778 augenrules[1624]: No rules Mar 17 18:55:10.463195 systemd[1]: Finished audit-rules.service. Mar 17 18:55:10.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.463601 sudo[1602]: pam_unix(sudo:session): session closed for user root Mar 17 18:55:10.466677 kernel: audit: type=1130 audit(1742237710.462:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.468153 systemd[1]: Started sshd@6-139.178.70.99:22-139.178.68.195:43710.service. 
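(Annotation: the PROCTITLE fields in these audit records are the raw process command line, hex-encoded with NUL bytes between arguments, and can be decoded directly; xxd is assumed to be available. For the auditctl record above:
    echo 2F7362696E2F617564697463746C002D44 | xxd -r -p | tr '\0' ' '; echo
    # prints: /sbin/auditctl -D
Any other proctitle string in this log can be substituted the same way.)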
Mar 17 18:55:10.468377 sshd[1596]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:10.477194 kernel: audit: type=1106 audit(1742237710.462:160): pid=1602 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.477228 kernel: audit: type=1104 audit(1742237710.463:161): pid=1602 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.477243 kernel: audit: type=1130 audit(1742237710.467:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.99:22-139.178.68.195:43710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.462000 audit[1602]: USER_END pid=1602 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.463000 audit[1602]: CRED_DISP pid=1602 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.99:22-139.178.68.195:43710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.477000 audit[1596]: USER_END pid=1596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:10.478435 systemd[1]: sshd@5-139.178.70.99:22-139.178.68.195:43706.service: Deactivated successfully. Mar 17 18:55:10.478875 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:55:10.482432 systemd-logind[1338]: Session 8 logged out. Waiting for processes to exit. 
Mar 17 18:55:10.477000 audit[1596]: CRED_DISP pid=1596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:10.485391 kernel: audit: type=1106 audit(1742237710.477:163): pid=1596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:10.485415 kernel: audit: type=1104 audit(1742237710.477:164): pid=1596 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:10.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.99:22-139.178.68.195:43706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.485780 systemd-logind[1338]: Removed session 8. Mar 17 18:55:10.502000 audit[1629]: USER_ACCT pid=1629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:10.503322 sshd[1629]: Accepted publickey for core from 139.178.68.195 port 43710 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:55:10.503000 audit[1629]: CRED_ACQ pid=1629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:10.503000 audit[1629]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2a419860 a2=3 a3=0 items=0 ppid=1 pid=1629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.503000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:55:10.504331 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:55:10.507006 systemd[1]: Started session-9.scope. Mar 17 18:55:10.507199 systemd-logind[1338]: New session 9 of user core. 
Mar 17 18:55:10.509000 audit[1629]: USER_START pid=1629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:10.510000 audit[1634]: CRED_ACQ pid=1634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:10.555507 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:55:10.555644 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:55:10.554000 audit[1635]: USER_ACCT pid=1635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.554000 audit[1635]: CRED_REFR pid=1635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.556000 audit[1635]: USER_START pid=1635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.568592 systemd[1]: Starting docker.service... Mar 17 18:55:10.592145 env[1646]: time="2025-03-17T18:55:10.592115331Z" level=info msg="Starting up" Mar 17 18:55:10.593295 env[1646]: time="2025-03-17T18:55:10.593277863Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:55:10.593295 env[1646]: time="2025-03-17T18:55:10.593290777Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:55:10.593350 env[1646]: time="2025-03-17T18:55:10.593304611Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:55:10.593350 env[1646]: time="2025-03-17T18:55:10.593311150Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:55:10.594395 env[1646]: time="2025-03-17T18:55:10.594384907Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:55:10.594444 env[1646]: time="2025-03-17T18:55:10.594434754Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:55:10.594494 env[1646]: time="2025-03-17T18:55:10.594483144Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:55:10.594533 env[1646]: time="2025-03-17T18:55:10.594524962Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:55:10.597726 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3615969431-merged.mount: Deactivated successfully. 
Mar 17 18:55:10.618398 env[1646]: time="2025-03-17T18:55:10.618381626Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 18:55:10.618493 env[1646]: time="2025-03-17T18:55:10.618482133Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 18:55:10.618642 env[1646]: time="2025-03-17T18:55:10.618632366Z" level=info msg="Loading containers: start." Mar 17 18:55:10.659000 audit[1676]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.659000 audit[1676]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc139c0810 a2=0 a3=7ffc139c07fc items=0 ppid=1646 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.659000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Mar 17 18:55:10.660000 audit[1678]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.660000 audit[1678]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdb03c9ec0 a2=0 a3=7ffdb03c9eac items=0 ppid=1646 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.660000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Mar 17 18:55:10.661000 audit[1680]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.661000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd54f312a0 a2=0 a3=7ffd54f3128c items=0 ppid=1646 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.661000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Mar 17 18:55:10.663000 audit[1682]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.663000 audit[1682]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff3390d3f0 a2=0 a3=7fff3390d3dc items=0 ppid=1646 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.663000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Mar 17 18:55:10.664000 audit[1684]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.664000 audit[1684]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd54504930 a2=0 a3=7ffd5450491c items=0 ppid=1646 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.664000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Mar 17 18:55:10.676000 audit[1689]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.676000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8cd27940 a2=0 a3=7ffc8cd2792c items=0 ppid=1646 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.676000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Mar 17 18:55:10.679000 audit[1691]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.679000 audit[1691]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc411e620 a2=0 a3=7fffc411e60c items=0 ppid=1646 pid=1691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.679000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Mar 17 18:55:10.680000 audit[1693]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.680000 audit[1693]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd8b2e6c90 a2=0 a3=7ffd8b2e6c7c items=0 ppid=1646 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.680000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Mar 17 18:55:10.681000 audit[1695]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.681000 audit[1695]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcbb148940 a2=0 a3=7ffcbb14892c items=0 ppid=1646 pid=1695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.681000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:55:10.686000 audit[1699]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.686000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff5c1cfd70 a2=0 a3=7fff5c1cfd5c items=0 ppid=1646 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.686000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:55:10.690000 audit[1700]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.690000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffedbacd750 a2=0 a3=7ffedbacd73c items=0 ppid=1646 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.690000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:55:10.698682 kernel: Initializing XFRM netlink socket Mar 17 18:55:10.720939 env[1646]: time="2025-03-17T18:55:10.720916074Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:55:10.734000 audit[1708]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.734000 audit[1708]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffdaed6bf70 a2=0 a3=7ffdaed6bf5c items=0 ppid=1646 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.734000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Mar 17 18:55:10.742000 audit[1711]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.742000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffca5ee2db0 a2=0 a3=7ffca5ee2d9c items=0 ppid=1646 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.742000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Mar 17 18:55:10.744000 audit[1714]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.744000 audit[1714]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe082399a0 a2=0 a3=7ffe0823998c items=0 ppid=1646 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.744000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Mar 17 18:55:10.745000 audit[1716]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.745000 audit[1716]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffff03bb6c0 a2=0 a3=7ffff03bb6ac items=0 ppid=1646 pid=1716 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.745000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Mar 17 18:55:10.747000 audit[1718]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1718 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.747000 audit[1718]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe0d180d50 a2=0 a3=7ffe0d180d3c items=0 ppid=1646 pid=1718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.747000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Mar 17 18:55:10.748000 audit[1720]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.748000 audit[1720]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc08ba6180 a2=0 a3=7ffc08ba616c items=0 ppid=1646 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.748000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Mar 17 18:55:10.749000 audit[1722]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.749000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd74abbfd0 a2=0 a3=7ffd74abbfbc items=0 ppid=1646 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.749000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Mar 17 18:55:10.876000 audit[1725]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.876000 audit[1725]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffcd99ba7b0 a2=0 a3=7ffcd99ba79c items=0 ppid=1646 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.876000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Mar 17 18:55:10.877000 audit[1727]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.877000 audit[1727]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe177f6360 a2=0 a3=7ffe177f634c items=0 ppid=1646 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.877000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Mar 17 18:55:10.879000 audit[1729]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.879000 audit[1729]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffec78bc1b0 a2=0 a3=7ffec78bc19c items=0 ppid=1646 pid=1729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.879000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Mar 17 18:55:10.880000 audit[1731]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.880000 audit[1731]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffd8f99950 a2=0 a3=7fffd8f9993c items=0 ppid=1646 pid=1731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.880000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Mar 17 18:55:10.881758 systemd-networkd[1121]: docker0: Link UP Mar 17 18:55:10.885000 audit[1735]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.885000 audit[1735]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffebc866170 a2=0 a3=7ffebc86615c items=0 ppid=1646 pid=1735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.885000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:55:10.889000 audit[1736]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:10.889000 audit[1736]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc16b299d0 a2=0 a3=7ffc16b299bc items=0 ppid=1646 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:10.889000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:55:10.890595 env[1646]: time="2025-03-17T18:55:10.890573561Z" level=info msg="Loading containers: done." 
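Note: the proctitle= fields in the audit records above are the issuing command lines, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch for decoding one; the hex string is copied from the DOCKER-USER rule insertion recorded above, and the helper name decode_proctitle is purely illustrative:

# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D4900464F5257415244002D6A00444F434B45522D55534552"
))
# prints: /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER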
Mar 17 18:55:10.897660 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1914220117-merged.mount: Deactivated successfully. Mar 17 18:55:10.902528 env[1646]: time="2025-03-17T18:55:10.902508390Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:55:10.902707 env[1646]: time="2025-03-17T18:55:10.902696631Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:55:10.902801 env[1646]: time="2025-03-17T18:55:10.902792540Z" level=info msg="Daemon has completed initialization" Mar 17 18:55:10.909732 systemd[1]: Started docker.service. Mar 17 18:55:10.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:10.914727 env[1646]: time="2025-03-17T18:55:10.914695696Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:55:11.726125 env[1374]: time="2025-03-17T18:55:11.726096265Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:55:12.217585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164741055.mount: Deactivated successfully. Mar 17 18:55:13.487932 env[1374]: time="2025-03-17T18:55:13.487900923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:13.488729 env[1374]: time="2025-03-17T18:55:13.488713309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:13.489784 env[1374]: time="2025-03-17T18:55:13.489772903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:13.490717 env[1374]: time="2025-03-17T18:55:13.490702553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:13.491229 env[1374]: time="2025-03-17T18:55:13.491214732Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 18:55:13.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:13.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:13.495750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 18:55:13.495863 systemd[1]: Stopped kubelet.service. Mar 17 18:55:13.496940 systemd[1]: Starting kubelet.service... 
Mar 17 18:55:13.498472 env[1374]: time="2025-03-17T18:55:13.498452519Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:55:13.547195 systemd[1]: Started kubelet.service. Mar 17 18:55:13.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:13.594161 kubelet[1786]: E0317 18:55:13.594122 1786 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:55:13.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:55:13.595246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:55:13.595336 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:55:15.027890 update_engine[1339]: I0317 18:55:15.027689 1339 update_attempter.cc:509] Updating boot flags... Mar 17 18:55:15.236658 env[1374]: time="2025-03-17T18:55:15.236242228Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:15.236935 env[1374]: time="2025-03-17T18:55:15.236837125Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:15.238438 env[1374]: time="2025-03-17T18:55:15.238044429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:15.238993 env[1374]: time="2025-03-17T18:55:15.238980844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:15.239432 env[1374]: time="2025-03-17T18:55:15.239417733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 18:55:15.244724 env[1374]: time="2025-03-17T18:55:15.244707196Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:55:16.682261 env[1374]: time="2025-03-17T18:55:16.682216179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:16.684770 env[1374]: time="2025-03-17T18:55:16.684755287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:16.691094 env[1374]: time="2025-03-17T18:55:16.691071917Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:16.698812 env[1374]: time="2025-03-17T18:55:16.698797808Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:16.699202 env[1374]: time="2025-03-17T18:55:16.699186342Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 18:55:16.705215 env[1374]: time="2025-03-17T18:55:16.705192649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:55:17.669350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136554802.mount: Deactivated successfully. Mar 17 18:55:18.080390 env[1374]: time="2025-03-17T18:55:18.080352968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:18.081241 env[1374]: time="2025-03-17T18:55:18.081218973Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:18.082140 env[1374]: time="2025-03-17T18:55:18.082124410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:18.083015 env[1374]: time="2025-03-17T18:55:18.082990231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:18.083385 env[1374]: time="2025-03-17T18:55:18.083369467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 18:55:18.090208 env[1374]: time="2025-03-17T18:55:18.090187850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:55:18.647225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721142634.mount: Deactivated successfully. 
Mar 17 18:55:19.395845 env[1374]: time="2025-03-17T18:55:19.395803388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:19.397353 env[1374]: time="2025-03-17T18:55:19.397331335Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:19.399047 env[1374]: time="2025-03-17T18:55:19.399019704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:19.400702 env[1374]: time="2025-03-17T18:55:19.400675560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:19.401351 env[1374]: time="2025-03-17T18:55:19.401324794Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 18:55:19.409421 env[1374]: time="2025-03-17T18:55:19.409385027Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:55:19.824083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456244934.mount: Deactivated successfully. Mar 17 18:55:19.825987 env[1374]: time="2025-03-17T18:55:19.825961527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:19.826446 env[1374]: time="2025-03-17T18:55:19.826434242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:19.827251 env[1374]: time="2025-03-17T18:55:19.827239726Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:19.827990 env[1374]: time="2025-03-17T18:55:19.827977612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:19.828306 env[1374]: time="2025-03-17T18:55:19.828292551Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 18:55:19.833975 env[1374]: time="2025-03-17T18:55:19.833952870Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:55:20.246997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445420676.mount: Deactivated successfully. 
Mar 17 18:55:22.375310 env[1374]: time="2025-03-17T18:55:22.375278987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:22.388161 env[1374]: time="2025-03-17T18:55:22.388142297Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:22.393623 env[1374]: time="2025-03-17T18:55:22.393606561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:22.401499 env[1374]: time="2025-03-17T18:55:22.401481816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:22.402210 env[1374]: time="2025-03-17T18:55:22.402191537Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 18:55:23.745792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 18:55:23.745928 systemd[1]: Stopped kubelet.service. Mar 17 18:55:23.747281 systemd[1]: Starting kubelet.service... Mar 17 18:55:23.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:23.748290 kernel: kauditd_printk_skb: 88 callbacks suppressed Mar 17 18:55:23.748330 kernel: audit: type=1130 audit(1742237723.744:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:23.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:23.753303 kernel: audit: type=1131 audit(1742237723.744:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:24.408763 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 18:55:24.408806 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 18:55:24.408968 systemd[1]: Stopped kubelet.service. Mar 17 18:55:24.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:55:24.411904 kernel: audit: type=1130 audit(1742237724.407:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:55:24.412950 systemd[1]: Starting kubelet.service... Mar 17 18:55:24.423478 systemd[1]: Reloading. 
Mar 17 18:55:24.486540 /usr/lib/systemd/system-generators/torcx-generator[1926]: time="2025-03-17T18:55:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:55:24.486557 /usr/lib/systemd/system-generators/torcx-generator[1926]: time="2025-03-17T18:55:24Z" level=info msg="torcx already run" Mar 17 18:55:24.525900 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:55:24.525915 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:55:24.538869 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:55:24.652972 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 18:55:24.653045 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 18:55:24.653306 systemd[1]: Stopped kubelet.service. Mar 17 18:55:24.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:55:24.654924 systemd[1]: Starting kubelet.service... Mar 17 18:55:24.657676 kernel: audit: type=1130 audit(1742237724.651:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:55:25.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:25.036680 systemd[1]: Started kubelet.service. Mar 17 18:55:25.040679 kernel: audit: type=1130 audit(1742237725.036:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:25.161159 kubelet[2001]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:55:25.161557 kubelet[2001]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:55:25.161657 kubelet[2001]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:55:25.161772 kubelet[2001]: I0317 18:55:25.161746 2001 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:55:25.467916 kubelet[2001]: I0317 18:55:25.467859 2001 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:55:25.467916 kubelet[2001]: I0317 18:55:25.467876 2001 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:55:25.468178 kubelet[2001]: I0317 18:55:25.468165 2001 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:55:25.629008 kubelet[2001]: I0317 18:55:25.628980 2001 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:55:25.650053 kubelet[2001]: E0317 18:55:25.650035 2001 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.707741 kubelet[2001]: I0317 18:55:25.707717 2001 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:55:25.711821 kubelet[2001]: I0317 18:55:25.711642 2001 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:55:25.712111 kubelet[2001]: I0317 18:55:25.711893 2001 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:55:25.712243 kubelet[2001]: I0317 18:55:25.712231 2001 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:55:25.712306 kubelet[2001]: I0317 18:55:25.712297 2001 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:55:25.712457 kubelet[2001]: I0317 18:55:25.712447 2001 state_mem.go:36] "Initialized new in-memory state store" 
Mar 17 18:55:25.713353 kubelet[2001]: I0317 18:55:25.713343 2001 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:55:25.713679 kubelet[2001]: I0317 18:55:25.713659 2001 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:55:25.713759 kubelet[2001]: I0317 18:55:25.713748 2001 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:55:25.713835 kubelet[2001]: I0317 18:55:25.713824 2001 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:55:25.718343 kubelet[2001]: W0317 18:55:25.713728 2001 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.718460 kubelet[2001]: E0317 18:55:25.718437 2001 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.719079 kubelet[2001]: W0317 18:55:25.719054 2001 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.719160 kubelet[2001]: E0317 18:55:25.719148 2001 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.719812 kubelet[2001]: I0317 18:55:25.719799 2001 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:55:25.722372 kubelet[2001]: I0317 18:55:25.722361 2001 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:55:25.722481 kubelet[2001]: W0317 18:55:25.722470 2001 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
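Note: the repeated "dial tcp 139.178.70.99:6443: connect: connection refused" reflector errors above mean only that nothing is accepting connections on the API server endpoint yet. A hedged Python sketch of the same reachability check; the function name api_server_up and the 2-second timeout are illustrative, while the host and port come from the log records above:

import socket

# Returns True if a TCP connection to the API server endpoint succeeds.
def api_server_up(host: str = "139.178.70.99", port: int = 6443, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(api_server_up())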
Mar 17 18:55:25.722953 kubelet[2001]: I0317 18:55:25.722943 2001 server.go:1264] "Started kubelet" Mar 17 18:55:25.731924 kubelet[2001]: I0317 18:55:25.731784 2001 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:55:25.733085 kubelet[2001]: I0317 18:55:25.732688 2001 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:55:25.733328 kubelet[2001]: I0317 18:55:25.733294 2001 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:55:25.733535 kubelet[2001]: I0317 18:55:25.733525 2001 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:55:25.733746 kubelet[2001]: E0317 18:55:25.733677 2001 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.99:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182dabfb13ffe369 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:55:25.722927977 +0000 UTC m=+0.680403281,LastTimestamp:2025-03-17 18:55:25.722927977 +0000 UTC m=+0.680403281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:55:25.734000 audit[2001]: AVC avc: denied { mac_admin } for pid=2001 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:25.735711 kubelet[2001]: I0317 18:55:25.735699 2001 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Mar 17 18:55:25.735789 kubelet[2001]: I0317 18:55:25.735778 2001 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Mar 17 18:55:25.735899 kubelet[2001]: I0317 18:55:25.735890 2001 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:55:25.734000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:25.738125 kubelet[2001]: I0317 18:55:25.738116 2001 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:55:25.738229 kubelet[2001]: I0317 18:55:25.738220 2001 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:55:25.738315 kubelet[2001]: I0317 18:55:25.738308 2001 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:55:25.739204 kernel: audit: type=1400 audit(1742237725.734:208): avc: denied { mac_admin } for pid=2001 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:25.739242 kernel: audit: type=1401 audit(1742237725.734:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:25.739260 kernel: audit: type=1300 audit(1742237725.734:208): arch=c000003e syscall=188 success=no exit=-22 a0=c000922a50 a1=c00092a1e0 a2=c000922a20 a3=25 
items=0 ppid=1 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.734000 audit[2001]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000922a50 a1=c00092a1e0 a2=c000922a20 a3=25 items=0 ppid=1 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.741205 kubelet[2001]: W0317 18:55:25.741182 2001 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.741270 kubelet[2001]: E0317 18:55:25.741260 2001 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.741361 kubelet[2001]: E0317 18:55:25.741347 2001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="200ms" Mar 17 18:55:25.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:25.746477 kubelet[2001]: I0317 18:55:25.746464 2001 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:55:25.746608 kubelet[2001]: I0317 18:55:25.746597 2001 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:55:25.746868 kernel: audit: type=1327 audit(1742237725.734:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:25.734000 audit[2001]: AVC avc: denied { mac_admin } for pid=2001 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:25.747701 kubelet[2001]: I0317 18:55:25.747693 2001 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:55:25.749945 kernel: audit: type=1400 audit(1742237725.734:209): avc: denied { mac_admin } for pid=2001 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:25.734000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:25.734000 audit[2001]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00071ed00 a1=c00092a1f8 a2=c000922ae0 a3=25 items=0 ppid=1 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:25.738000 audit[2011]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:25.738000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff6b039f20 a2=0 a3=7fff6b039f0c items=0 ppid=2001 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Mar 17 18:55:25.738000 audit[2012]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:25.738000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc55590c40 a2=0 a3=7ffc55590c2c items=0 ppid=2001 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Mar 17 18:55:25.738000 audit[2014]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:25.738000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdae1741d0 a2=0 a3=7ffdae1741bc items=0 ppid=2001 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:55:25.742000 audit[2016]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:25.742000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe5a5a56e0 a2=0 a3=7ffe5a5a56cc items=0 ppid=2001 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:55:25.767901 kubelet[2001]: E0317 18:55:25.767884 2001 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:55:25.768000 audit[2022]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:25.768000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe59af9990 a2=0 a3=7ffe59af997c items=0 ppid=2001 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Mar 17 18:55:25.769486 kubelet[2001]: I0317 18:55:25.769470 2001 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:55:25.769719 kubelet[2001]: I0317 18:55:25.769707 2001 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:55:25.769719 kubelet[2001]: I0317 18:55:25.769716 2001 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:55:25.769778 kubelet[2001]: I0317 18:55:25.769735 2001 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:55:25.769000 audit[2024]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:25.769000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc91508e40 a2=0 a3=7ffc91508e2c items=0 ppid=2001 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Mar 17 18:55:25.770818 kubelet[2001]: I0317 18:55:25.770749 2001 policy_none.go:49] "None policy: Start" Mar 17 18:55:25.771257 kubelet[2001]: I0317 18:55:25.771133 2001 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:55:25.771257 kubelet[2001]: I0317 18:55:25.771148 2001 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:55:25.771537 kubelet[2001]: I0317 18:55:25.771506 2001 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:55:25.771537 kubelet[2001]: I0317 18:55:25.771534 2001 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:55:25.771654 kubelet[2001]: I0317 18:55:25.771549 2001 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:55:25.771654 kubelet[2001]: E0317 18:55:25.771593 2001 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:55:25.771000 audit[2025]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:25.771000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeef0991b0 a2=0 a3=7ffeef09919c items=0 ppid=2001 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Mar 17 18:55:25.772000 audit[2026]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:25.772000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa0fb1350 a2=0 a3=7fffa0fb133c items=0 ppid=2001 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Mar 17 18:55:25.774486 kubelet[2001]: I0317 18:55:25.774476 2001 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:55:25.773000 audit[2001]: AVC avc: denied { mac_admin } for pid=2001 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:25.773000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:25.773000 audit[2001]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bcfb60 a1=c00014d698 a2=c000bcfb30 a3=25 items=0 ppid=1 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.773000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:25.774764 kubelet[2001]: I0317 18:55:25.774754 2001 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Mar 17 18:55:25.774887 kubelet[2001]: I0317 18:55:25.774868 2001 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:55:25.774985 kubelet[2001]: I0317 18:55:25.774978 2001 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:55:25.775577 kubelet[2001]: W0317 18:55:25.775546 2001 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.775642 kubelet[2001]: E0317 18:55:25.775634 2001 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:25.775000 audit[2028]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:25.775000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb5d71990 a2=0 a3=7ffcb5d7197c items=0 ppid=2001 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.775000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Mar 17 18:55:25.776000 audit[2029]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:25.776000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff967666a0 a2=0 a3=7fff9676668c items=0 ppid=2001 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Mar 17 18:55:25.778953 kubelet[2001]: E0317 18:55:25.778942 2001 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 18:55:25.778000 audit[2030]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:25.778000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa8e26410 a2=0 a3=7fffa8e263fc items=0 ppid=2001 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Mar 17 18:55:25.778000 audit[2031]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:25.778000 audit[2031]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffee2edec70 a2=0 a3=7ffee2edec5c items=0 ppid=2001 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:25.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Mar 17 18:55:25.839738 kubelet[2001]: I0317 18:55:25.839710 2001 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:55:25.839937 kubelet[2001]: E0317 18:55:25.839921 2001 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Mar 17 18:55:25.872183 kubelet[2001]: I0317 18:55:25.872151 2001 topology_manager.go:215] "Topology Admit Handler" podUID="91ab5aab2e8291cf9785bca624e4e79f" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 18:55:25.873114 kubelet[2001]: I0317 18:55:25.873098 2001 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 18:55:25.873907 kubelet[2001]: I0317 18:55:25.873894 2001 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 18:55:25.941708 kubelet[2001]: E0317 18:55:25.941686 2001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="400ms" Mar 17 18:55:26.039983 kubelet[2001]: I0317 18:55:26.039961 2001 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:26.040111 kubelet[2001]: I0317 18:55:26.040101 2001 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:55:26.040176 kubelet[2001]: I0317 18:55:26.040167 2001 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:26.040233 kubelet[2001]: I0317 18:55:26.040225 2001 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:26.040290 kubelet[2001]: I0317 18:55:26.040276 2001 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:26.040342 kubelet[2001]: I0317 18:55:26.040334 2001 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:26.040396 kubelet[2001]: I0317 18:55:26.040382 2001 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91ab5aab2e8291cf9785bca624e4e79f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"91ab5aab2e8291cf9785bca624e4e79f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:55:26.040451 kubelet[2001]: I0317 18:55:26.040443 2001 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91ab5aab2e8291cf9785bca624e4e79f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"91ab5aab2e8291cf9785bca624e4e79f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:55:26.040508 kubelet[2001]: I0317 18:55:26.040495 2001 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91ab5aab2e8291cf9785bca624e4e79f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"91ab5aab2e8291cf9785bca624e4e79f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:55:26.041356 kubelet[2001]: I0317 18:55:26.041340 2001 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:55:26.041612 kubelet[2001]: E0317 18:55:26.041602 2001 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Mar 17 18:55:26.178151 env[1374]: time="2025-03-17T18:55:26.178112569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:91ab5aab2e8291cf9785bca624e4e79f,Namespace:kube-system,Attempt:0,}" Mar 17 18:55:26.178392 env[1374]: time="2025-03-17T18:55:26.178109079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 17 18:55:26.181999 env[1374]: time="2025-03-17T18:55:26.181805737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 17 18:55:26.343408 kubelet[2001]: E0317 18:55:26.342949 2001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="800ms" Mar 17 18:55:26.443339 kubelet[2001]: I0317 18:55:26.443319 2001 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:55:26.443695 kubelet[2001]: E0317 18:55:26.443658 2001 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Mar 17 18:55:26.641516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023823470.mount: Deactivated successfully. Mar 17 18:55:26.644352 env[1374]: time="2025-03-17T18:55:26.644335734Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.644862 env[1374]: time="2025-03-17T18:55:26.644850420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.645310 env[1374]: time="2025-03-17T18:55:26.645298685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.646783 env[1374]: time="2025-03-17T18:55:26.646771832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.648252 env[1374]: time="2025-03-17T18:55:26.648240104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.650250 env[1374]: time="2025-03-17T18:55:26.650234687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.650961 env[1374]: time="2025-03-17T18:55:26.650945766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.653991 env[1374]: time="2025-03-17T18:55:26.653975520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.655183 env[1374]: time="2025-03-17T18:55:26.655142501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.655674 env[1374]: time="2025-03-17T18:55:26.655651265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.656154 env[1374]: time="2025-03-17T18:55:26.656132938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.656661 env[1374]: time="2025-03-17T18:55:26.656638811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:26.669628 env[1374]: time="2025-03-17T18:55:26.669501915Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:55:26.669628 env[1374]: time="2025-03-17T18:55:26.669520699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:55:26.669628 env[1374]: time="2025-03-17T18:55:26.669527086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:55:26.669785 env[1374]: time="2025-03-17T18:55:26.668328871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:55:26.669785 env[1374]: time="2025-03-17T18:55:26.668361617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:55:26.669785 env[1374]: time="2025-03-17T18:55:26.668370174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:55:26.669785 env[1374]: time="2025-03-17T18:55:26.668439081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7f34a4ebf0368df5fb0fde4e4f68ad3a4b97abdf3c998c65b70ac6087705ecb pid=2048 runtime=io.containerd.runc.v2 Mar 17 18:55:26.669943 env[1374]: time="2025-03-17T18:55:26.669919441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e17503d75887a51da3637ed250aa221543551a25736be3224b57f9ecc9b55eb4 pid=2049 runtime=io.containerd.runc.v2 Mar 17 18:55:26.677045 env[1374]: time="2025-03-17T18:55:26.677013104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:55:26.677142 env[1374]: time="2025-03-17T18:55:26.677033260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:55:26.677142 env[1374]: time="2025-03-17T18:55:26.677041233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:55:26.677142 env[1374]: time="2025-03-17T18:55:26.677105215Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e5c56f93503e25bb4344a7f6df75d9af9ecf922fd3c59b31e5cc8dca33ff1ae pid=2081 runtime=io.containerd.runc.v2 Mar 17 18:55:26.726958 env[1374]: time="2025-03-17T18:55:26.726924741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:91ab5aab2e8291cf9785bca624e4e79f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7f34a4ebf0368df5fb0fde4e4f68ad3a4b97abdf3c998c65b70ac6087705ecb\"" Mar 17 18:55:26.729436 env[1374]: time="2025-03-17T18:55:26.729420698Z" level=info msg="CreateContainer within sandbox \"f7f34a4ebf0368df5fb0fde4e4f68ad3a4b97abdf3c998c65b70ac6087705ecb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:55:26.739406 env[1374]: time="2025-03-17T18:55:26.739386591Z" level=info msg="CreateContainer within sandbox \"f7f34a4ebf0368df5fb0fde4e4f68ad3a4b97abdf3c998c65b70ac6087705ecb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7867d451abcde73446a4b026f1c1ef353426d0ae20e4a1b57304aab20a189493\"" Mar 17 18:55:26.739835 env[1374]: time="2025-03-17T18:55:26.739821886Z" level=info msg="StartContainer for \"7867d451abcde73446a4b026f1c1ef353426d0ae20e4a1b57304aab20a189493\"" Mar 17 18:55:26.740054 env[1374]: time="2025-03-17T18:55:26.739910362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e17503d75887a51da3637ed250aa221543551a25736be3224b57f9ecc9b55eb4\"" Mar 17 18:55:26.741327 env[1374]: time="2025-03-17T18:55:26.741314490Z" level=info msg="CreateContainer within sandbox \"e17503d75887a51da3637ed250aa221543551a25736be3224b57f9ecc9b55eb4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:55:26.747655 env[1374]: time="2025-03-17T18:55:26.747634205Z" level=info msg="CreateContainer within sandbox \"e17503d75887a51da3637ed250aa221543551a25736be3224b57f9ecc9b55eb4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fb0f4714580c1ae5343b719d34a74a794f804d63e6986db78483b4be540557d1\"" Mar 17 18:55:26.747963 env[1374]: time="2025-03-17T18:55:26.747951411Z" level=info msg="StartContainer for \"fb0f4714580c1ae5343b719d34a74a794f804d63e6986db78483b4be540557d1\"" Mar 17 18:55:26.761237 env[1374]: time="2025-03-17T18:55:26.761213327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e5c56f93503e25bb4344a7f6df75d9af9ecf922fd3c59b31e5cc8dca33ff1ae\"" Mar 17 18:55:26.770463 env[1374]: time="2025-03-17T18:55:26.770444416Z" level=info msg="CreateContainer within sandbox \"7e5c56f93503e25bb4344a7f6df75d9af9ecf922fd3c59b31e5cc8dca33ff1ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:55:26.802453 env[1374]: time="2025-03-17T18:55:26.802430087Z" level=info msg="CreateContainer within sandbox \"7e5c56f93503e25bb4344a7f6df75d9af9ecf922fd3c59b31e5cc8dca33ff1ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"89af82fcd707b90aff62e70c123875926e6f65c09834600c784584de18da1b61\"" Mar 17 18:55:26.803044 env[1374]: time="2025-03-17T18:55:26.803032253Z" level=info msg="StartContainer for 
\"89af82fcd707b90aff62e70c123875926e6f65c09834600c784584de18da1b61\"" Mar 17 18:55:26.814903 env[1374]: time="2025-03-17T18:55:26.814879482Z" level=info msg="StartContainer for \"7867d451abcde73446a4b026f1c1ef353426d0ae20e4a1b57304aab20a189493\" returns successfully" Mar 17 18:55:26.836448 env[1374]: time="2025-03-17T18:55:26.836413533Z" level=info msg="StartContainer for \"fb0f4714580c1ae5343b719d34a74a794f804d63e6986db78483b4be540557d1\" returns successfully" Mar 17 18:55:26.844659 kubelet[2001]: W0317 18:55:26.844592 2001 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:26.844659 kubelet[2001]: E0317 18:55:26.844645 2001 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:26.861136 env[1374]: time="2025-03-17T18:55:26.861108662Z" level=info msg="StartContainer for \"89af82fcd707b90aff62e70c123875926e6f65c09834600c784584de18da1b61\" returns successfully" Mar 17 18:55:27.047888 kubelet[2001]: W0317 18:55:27.047828 2001 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:27.047888 kubelet[2001]: E0317 18:55:27.047867 2001 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:27.140453 kubelet[2001]: W0317 18:55:27.140396 2001 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:27.140453 kubelet[2001]: E0317 18:55:27.140437 2001 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:27.144713 kubelet[2001]: E0317 18:55:27.144686 2001 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="1.6s" Mar 17 18:55:27.245135 kubelet[2001]: I0317 18:55:27.244944 2001 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:55:27.245135 kubelet[2001]: E0317 18:55:27.245116 2001 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Mar 17 18:55:27.289683 kubelet[2001]: W0317 18:55:27.289618 2001 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:27.289683 kubelet[2001]: E0317 18:55:27.289652 2001 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:27.725220 kubelet[2001]: E0317 18:55:27.725187 2001 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused Mar 17 18:55:28.778962 kubelet[2001]: E0317 18:55:28.778937 2001 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 18:55:28.846881 kubelet[2001]: I0317 18:55:28.846859 2001 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:55:28.864957 kubelet[2001]: I0317 18:55:28.864934 2001 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 18:55:28.870494 kubelet[2001]: E0317 18:55:28.870465 2001 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:55:28.971282 kubelet[2001]: E0317 18:55:28.971252 2001 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:55:29.072124 kubelet[2001]: E0317 18:55:29.072052 2001 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:55:29.172726 kubelet[2001]: E0317 18:55:29.172698 2001 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:55:29.273418 kubelet[2001]: E0317 18:55:29.273394 2001 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:55:29.374028 kubelet[2001]: E0317 18:55:29.373964 2001 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:55:29.721982 kubelet[2001]: I0317 18:55:29.721922 2001 apiserver.go:52] "Watching apiserver" Mar 17 18:55:29.738491 kubelet[2001]: I0317 18:55:29.738469 2001 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:55:30.605735 systemd[1]: Reloading. Mar 17 18:55:30.665730 /usr/lib/systemd/system-generators/torcx-generator[2290]: time="2025-03-17T18:55:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:55:30.665948 /usr/lib/systemd/system-generators/torcx-generator[2290]: time="2025-03-17T18:55:30Z" level=info msg="torcx already run" Mar 17 18:55:30.735012 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 18:55:30.735023 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:55:30.749299 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:55:30.799573 kubelet[2001]: I0317 18:55:30.799559 2001 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:55:30.800810 systemd[1]: Stopping kubelet.service... Mar 17 18:55:30.822025 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:55:30.822170 systemd[1]: Stopped kubelet.service. Mar 17 18:55:30.825028 kernel: kauditd_printk_skb: 43 callbacks suppressed Mar 17 18:55:30.825073 kernel: audit: type=1131 audit(1742237730.821:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:30.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:30.823470 systemd[1]: Starting kubelet.service... Mar 17 18:55:31.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:31.975735 kernel: audit: type=1130 audit(1742237731.971:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:31.971957 systemd[1]: Started kubelet.service. Mar 17 18:55:32.060393 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:55:32.060393 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:55:32.060393 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:55:32.060677 kubelet[2365]: I0317 18:55:32.060423 2365 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:55:32.062978 kubelet[2365]: I0317 18:55:32.062963 2365 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:55:32.062978 kubelet[2365]: I0317 18:55:32.062975 2365 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:55:32.063637 kubelet[2365]: I0317 18:55:32.063083 2365 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:55:32.063801 kubelet[2365]: I0317 18:55:32.063788 2365 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
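
The audit records throughout this section carry the invoking command line as a hex-encoded PROCTITLE field whose argv elements are separated by NUL bytes. A small decoding sketch in Python; the function and variable names are illustrative and not part of any tool shown in this log:

    # Decode a Linux audit PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(proctitle_hex: str) -> str:
        raw = bytes.fromhex(proctitle_hex)
        # Replace the NUL separators with spaces so the command reads naturally.
        return raw.decode("utf-8", errors="replace").replace("\x00", " ")

    # The ip6tables record logged at 18:55:25 above decodes to:
    #   ip6tables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t filter
    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030002D4E00"
        "4B5542452D4B5542454C45542D43414E415259002D740066696C746572"))

Note that some PROCTITLE values in this log appear truncated by the audit subsystem (the kubelet records below end mid-flag), so decoding those yields only the leading part of the command line.
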
Mar 17 18:55:32.077645 kubelet[2365]: I0317 18:55:32.077488 2365 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:55:32.083408 kubelet[2365]: I0317 18:55:32.083398 2365 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:55:32.097910 kubelet[2365]: I0317 18:55:32.097877 2365 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:55:32.098024 kubelet[2365]: I0317 18:55:32.097908 2365 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:55:32.098091 kubelet[2365]: I0317 18:55:32.098031 2365 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:55:32.098091 kubelet[2365]: I0317 18:55:32.098040 2365 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:55:32.098091 kubelet[2365]: I0317 18:55:32.098069 2365 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:55:32.098149 kubelet[2365]: I0317 18:55:32.098145 2365 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:55:32.098169 kubelet[2365]: I0317 18:55:32.098154 2365 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:55:32.098169 kubelet[2365]: I0317 18:55:32.098166 2365 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:55:32.098206 kubelet[2365]: I0317 18:55:32.098176 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:55:32.111379 kernel: audit: type=1400 audit(1742237732.102:225): avc: denied { mac_admin } for pid=2365 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:32.111438 kernel: audit: type=1401 audit(1742237732.102:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:32.111456 kernel: audit: type=1300 audit(1742237732.102:225): arch=c000003e 
syscall=188 success=no exit=-22 a0=c0008539b0 a1=c000878c90 a2=c000853980 a3=25 items=0 ppid=1 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:32.102000 audit[2365]: AVC avc: denied { mac_admin } for pid=2365 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:32.102000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:32.102000 audit[2365]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008539b0 a1=c000878c90 a2=c000853980 a3=25 items=0 ppid=1 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.101318 2365 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.101411 2365 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.101607 2365 server.go:1264] "Started kubelet" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.102740 2365 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.102758 2365 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.102769 2365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.106017 2365 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.106540 2365 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.107305 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.107403 2365 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.108995 2365 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.109795 2365 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.109854 2365 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.110729 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.111257 2365 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:55:32.111585 kubelet[2365]: I0317 18:55:32.111269 2365 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:55:32.111882 kubelet[2365]: I0317 18:55:32.111277 2365 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:55:32.111882 kubelet[2365]: E0317 18:55:32.111298 2365 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:55:32.122238 kernel: audit: type=1327 audit(1742237732.102:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:32.122292 kernel: audit: type=1400 audit(1742237732.102:226): avc: denied { mac_admin } for pid=2365 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:32.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:32.102000 audit[2365]: AVC avc: denied { mac_admin } for pid=2365 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:32.123960 kubelet[2365]: I0317 18:55:32.123864 2365 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:55:32.123960 kubelet[2365]: I0317 18:55:32.123919 2365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:55:32.130775 kubelet[2365]: I0317 18:55:32.129714 2365 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:55:32.135129 kernel: audit: type=1401 audit(1742237732.102:226): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:32.102000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:32.144355 kernel: audit: type=1300 audit(1742237732.102:226): arch=c000003e syscall=188 success=no exit=-22 a0=c000be8240 a1=c000878ca8 a2=c000853a40 a3=25 items=0 ppid=1 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:32.102000 audit[2365]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000be8240 a1=c000878ca8 a2=c000853a40 a3=25 items=0 ppid=1 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:32.149442 kernel: audit: type=1327 audit(1742237732.102:226): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:32.102000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:32.164711 kubelet[2365]: I0317 18:55:32.164694 2365 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:55:32.164711 kubelet[2365]: I0317 18:55:32.164705 2365 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:55:32.164711 kubelet[2365]: I0317 18:55:32.164715 2365 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:55:32.164833 kubelet[2365]: I0317 18:55:32.164819 2365 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:55:32.164833 kubelet[2365]: I0317 18:55:32.164825 2365 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:55:32.164870 kubelet[2365]: I0317 18:55:32.164837 2365 policy_none.go:49] "None policy: Start" Mar 17 18:55:32.165379 kubelet[2365]: I0317 18:55:32.165368 2365 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:55:32.165379 kubelet[2365]: I0317 18:55:32.165379 2365 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:55:32.165487 kubelet[2365]: I0317 18:55:32.165478 2365 state_mem.go:75] "Updated machine memory state" Mar 17 18:55:32.178268 kubelet[2365]: I0317 18:55:32.178259 2365 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:55:32.178358 kubelet[2365]: I0317 18:55:32.178349 2365 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Mar 17 18:55:32.178488 kubelet[2365]: I0317 18:55:32.178469 2365 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:55:32.177000 audit[2365]: AVC avc: denied { mac_admin } for pid=2365 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:55:32.177000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:55:32.177000 audit[2365]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00104c690 a1=c001221ad0 a2=c00104c660 a3=25 items=0 ppid=1 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:32.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:55:32.178767 kubelet[2365]: I0317 18:55:32.178761 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:55:32.209886 kubelet[2365]: I0317 18:55:32.209866 2365 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:55:32.211833 kubelet[2365]: I0317 18:55:32.211767 2365 topology_manager.go:215] "Topology Admit Handler" podUID="91ab5aab2e8291cf9785bca624e4e79f" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 18:55:32.212000 kubelet[2365]: I0317 18:55:32.211990 2365 topology_manager.go:215] "Topology Admit 
Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 18:55:32.213560 kubelet[2365]: I0317 18:55:32.213549 2365 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 18:55:32.217086 kubelet[2365]: E0317 18:55:32.215646 2365 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 18:55:32.217086 kubelet[2365]: I0317 18:55:32.214204 2365 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 17 18:55:32.217086 kubelet[2365]: I0317 18:55:32.215731 2365 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 18:55:32.311287 kubelet[2365]: I0317 18:55:32.311266 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:32.311287 kubelet[2365]: I0317 18:55:32.311288 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:55:32.311405 kubelet[2365]: I0317 18:55:32.311300 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91ab5aab2e8291cf9785bca624e4e79f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"91ab5aab2e8291cf9785bca624e4e79f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:55:32.311405 kubelet[2365]: I0317 18:55:32.311334 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91ab5aab2e8291cf9785bca624e4e79f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"91ab5aab2e8291cf9785bca624e4e79f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:55:32.311405 kubelet[2365]: I0317 18:55:32.311344 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:32.311405 kubelet[2365]: I0317 18:55:32.311353 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:32.311405 kubelet[2365]: I0317 18:55:32.311363 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:32.311526 kubelet[2365]: I0317 18:55:32.311372 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:55:32.311526 kubelet[2365]: I0317 18:55:32.311406 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91ab5aab2e8291cf9785bca624e4e79f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"91ab5aab2e8291cf9785bca624e4e79f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:55:33.100866 kubelet[2365]: I0317 18:55:33.100848 2365 apiserver.go:52] "Watching apiserver" Mar 17 18:55:33.152989 kubelet[2365]: E0317 18:55:33.152970 2365 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 18:55:33.169155 kubelet[2365]: I0317 18:55:33.169123 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.1691107370000005 podStartE2EDuration="4.169110737s" podCreationTimestamp="2025-03-17 18:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:55:33.165474291 +0000 UTC m=+1.181019070" watchObservedRunningTime="2025-03-17 18:55:33.169110737 +0000 UTC m=+1.184655509" Mar 17 18:55:33.175316 kubelet[2365]: I0317 18:55:33.175286 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.175274179 podStartE2EDuration="1.175274179s" podCreationTimestamp="2025-03-17 18:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:55:33.169597935 +0000 UTC m=+1.185142710" watchObservedRunningTime="2025-03-17 18:55:33.175274179 +0000 UTC m=+1.190818957" Mar 17 18:55:33.179642 kubelet[2365]: I0317 18:55:33.179610 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.179602238 podStartE2EDuration="1.179602238s" podCreationTimestamp="2025-03-17 18:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:55:33.175769086 +0000 UTC m=+1.191313865" watchObservedRunningTime="2025-03-17 18:55:33.179602238 +0000 UTC m=+1.195147014" Mar 17 18:55:33.210223 kubelet[2365]: I0317 18:55:33.210202 2365 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:55:36.005311 sudo[1635]: pam_unix(sudo:session): session closed for user root Mar 17 18:55:36.006603 kernel: kauditd_printk_skb: 4 callbacks suppressed Mar 17 18:55:36.006650 kernel: audit: type=1106 audit(1742237736.004:228): pid=1635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Mar 17 18:55:36.004000 audit[1635]: USER_END pid=1635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:36.005000 audit[1635]: CRED_DISP pid=1635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:36.012049 kernel: audit: type=1104 audit(1742237736.005:229): pid=1635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:55:36.014013 sshd[1629]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:36.014000 audit[1629]: USER_END pid=1629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:36.018782 kernel: audit: type=1106 audit(1742237736.014:230): pid=1629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:36.018908 systemd[1]: sshd@6-139.178.70.99:22-139.178.68.195:43710.service: Deactivated successfully. Mar 17 18:55:36.019528 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:55:36.019778 systemd-logind[1338]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:55:36.014000 audit[1629]: CRED_DISP pid=1629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:36.023416 systemd-logind[1338]: Removed session 9. Mar 17 18:55:36.023677 kernel: audit: type=1104 audit(1742237736.014:231): pid=1629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:55:36.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.99:22-139.178.68.195:43710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:36.028701 kernel: audit: type=1131 audit(1742237736.018:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.99:22-139.178.68.195:43710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:55:45.741237 kubelet[2365]: I0317 18:55:45.741215 2365 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:55:45.741899 env[1374]: time="2025-03-17T18:55:45.741877435Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
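
At this point the runtime already knows the pod CIDR 192.168.0.0/24 but is still waiting for a CNI configuration to be dropped in; in this cluster that configuration eventually comes from the Calico install driven by the tigera-operator pod admitted further below. As a purely illustrative sketch of the kind of file a network add-on writes into /etc/cni/net.d/ (using the reference bridge and host-local plugins, not the Calico config actually used here):

    {
      "cniVersion": "0.3.1",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.0.0/24"
          }
        }
      ]
    }

Once a readable .conf or .conflist file exists in that directory, the CRI runtime typically picks it up on its next sync and the missing-config message stops.
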
Mar 17 18:55:45.744063 kubelet[2365]: I0317 18:55:45.744049 2365 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:55:46.408648 kubelet[2365]: I0317 18:55:46.408622 2365 topology_manager.go:215] "Topology Admit Handler" podUID="fc855527-bce0-4714-98ed-531c132f9ba5" podNamespace="kube-system" podName="kube-proxy-zs5zg" Mar 17 18:55:46.498297 kubelet[2365]: I0317 18:55:46.498259 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc855527-bce0-4714-98ed-531c132f9ba5-kube-proxy\") pod \"kube-proxy-zs5zg\" (UID: \"fc855527-bce0-4714-98ed-531c132f9ba5\") " pod="kube-system/kube-proxy-zs5zg" Mar 17 18:55:46.498297 kubelet[2365]: I0317 18:55:46.498298 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc855527-bce0-4714-98ed-531c132f9ba5-lib-modules\") pod \"kube-proxy-zs5zg\" (UID: \"fc855527-bce0-4714-98ed-531c132f9ba5\") " pod="kube-system/kube-proxy-zs5zg" Mar 17 18:55:46.498438 kubelet[2365]: I0317 18:55:46.498311 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7r6\" (UniqueName: \"kubernetes.io/projected/fc855527-bce0-4714-98ed-531c132f9ba5-kube-api-access-kt7r6\") pod \"kube-proxy-zs5zg\" (UID: \"fc855527-bce0-4714-98ed-531c132f9ba5\") " pod="kube-system/kube-proxy-zs5zg" Mar 17 18:55:46.498438 kubelet[2365]: I0317 18:55:46.498321 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc855527-bce0-4714-98ed-531c132f9ba5-xtables-lock\") pod \"kube-proxy-zs5zg\" (UID: \"fc855527-bce0-4714-98ed-531c132f9ba5\") " pod="kube-system/kube-proxy-zs5zg" Mar 17 18:55:46.629096 kubelet[2365]: I0317 18:55:46.629069 2365 topology_manager.go:215] "Topology Admit Handler" podUID="bb88f754-ca8d-4b01-8c16-5d2c339445ec" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-wc6zp" Mar 17 18:55:46.700320 kubelet[2365]: I0317 18:55:46.700249 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446mk\" (UniqueName: \"kubernetes.io/projected/bb88f754-ca8d-4b01-8c16-5d2c339445ec-kube-api-access-446mk\") pod \"tigera-operator-7bc55997bb-wc6zp\" (UID: \"bb88f754-ca8d-4b01-8c16-5d2c339445ec\") " pod="tigera-operator/tigera-operator-7bc55997bb-wc6zp" Mar 17 18:55:46.700320 kubelet[2365]: I0317 18:55:46.700301 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb88f754-ca8d-4b01-8c16-5d2c339445ec-var-lib-calico\") pod \"tigera-operator-7bc55997bb-wc6zp\" (UID: \"bb88f754-ca8d-4b01-8c16-5d2c339445ec\") " pod="tigera-operator/tigera-operator-7bc55997bb-wc6zp" Mar 17 18:55:46.712114 env[1374]: time="2025-03-17T18:55:46.711794126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zs5zg,Uid:fc855527-bce0-4714-98ed-531c132f9ba5,Namespace:kube-system,Attempt:0,}" Mar 17 18:55:46.752354 env[1374]: time="2025-03-17T18:55:46.752304910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:55:46.752617 env[1374]: time="2025-03-17T18:55:46.752341701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:55:46.752617 env[1374]: time="2025-03-17T18:55:46.752353203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:55:46.752617 env[1374]: time="2025-03-17T18:55:46.752481537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe2fbfed7a7f8a35c699cd0dd831daf8aedaf8625052db3ba7fb5a0cd411ffff pid=2448 runtime=io.containerd.runc.v2 Mar 17 18:55:46.786139 env[1374]: time="2025-03-17T18:55:46.786113980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zs5zg,Uid:fc855527-bce0-4714-98ed-531c132f9ba5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe2fbfed7a7f8a35c699cd0dd831daf8aedaf8625052db3ba7fb5a0cd411ffff\"" Mar 17 18:55:46.788197 env[1374]: time="2025-03-17T18:55:46.788179778Z" level=info msg="CreateContainer within sandbox \"fe2fbfed7a7f8a35c699cd0dd831daf8aedaf8625052db3ba7fb5a0cd411ffff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:55:46.796442 env[1374]: time="2025-03-17T18:55:46.796413042Z" level=info msg="CreateContainer within sandbox \"fe2fbfed7a7f8a35c699cd0dd831daf8aedaf8625052db3ba7fb5a0cd411ffff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f255fe398171ba86fdd9ac27e964f93a884c9b2c43138db7dec475f8d3683cbc\"" Mar 17 18:55:46.797064 env[1374]: time="2025-03-17T18:55:46.797049751Z" level=info msg="StartContainer for \"f255fe398171ba86fdd9ac27e964f93a884c9b2c43138db7dec475f8d3683cbc\"" Mar 17 18:55:46.831557 env[1374]: time="2025-03-17T18:55:46.831535983Z" level=info msg="StartContainer for \"f255fe398171ba86fdd9ac27e964f93a884c9b2c43138db7dec475f8d3683cbc\" returns successfully" Mar 17 18:55:46.932138 env[1374]: time="2025-03-17T18:55:46.932115447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-wc6zp,Uid:bb88f754-ca8d-4b01-8c16-5d2c339445ec,Namespace:tigera-operator,Attempt:0,}" Mar 17 18:55:46.948789 env[1374]: time="2025-03-17T18:55:46.948744036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:55:46.948903 env[1374]: time="2025-03-17T18:55:46.948777796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:55:46.948903 env[1374]: time="2025-03-17T18:55:46.948788956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:55:46.949037 env[1374]: time="2025-03-17T18:55:46.948984881Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4c45841bdb7d0cc6155dd164f26af115a721231b2516ba02cb66e8f4b9c9ddd pid=2519 runtime=io.containerd.runc.v2 Mar 17 18:55:46.988236 env[1374]: time="2025-03-17T18:55:46.988172611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-wc6zp,Uid:bb88f754-ca8d-4b01-8c16-5d2c339445ec,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b4c45841bdb7d0cc6155dd164f26af115a721231b2516ba02cb66e8f4b9c9ddd\"" Mar 17 18:55:46.989521 env[1374]: time="2025-03-17T18:55:46.989509422Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Mar 17 18:55:47.187321 kubelet[2365]: I0317 18:55:47.187284 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zs5zg" podStartSLOduration=1.187270855 podStartE2EDuration="1.187270855s" podCreationTimestamp="2025-03-17 18:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:55:47.187160193 +0000 UTC m=+15.202704983" watchObservedRunningTime="2025-03-17 18:55:47.187270855 +0000 UTC m=+15.202815632" Mar 17 18:55:47.610635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657848657.mount: Deactivated successfully. Mar 17 18:55:47.752000 audit[2582]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.752000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefc28b1e0 a2=0 a3=7ffefc28b1cc items=0 ppid=2501 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.759062 kernel: audit: type=1325 audit(1742237747.752:233): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.759126 kernel: audit: type=1300 audit(1742237747.752:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefc28b1e0 a2=0 a3=7ffefc28b1cc items=0 ppid=2501 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.759145 kernel: audit: type=1327 audit(1742237747.752:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:55:47.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:55:47.752000 audit[2583]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.762900 kernel: audit: type=1325 audit(1742237747.752:234): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.752000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc95d3c790 a2=0 a3=7ffc95d3c77c items=0 ppid=2501 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.752000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:55:47.770128 kernel: audit: type=1300 audit(1742237747.752:234): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc95d3c790 a2=0 a3=7ffc95d3c77c items=0 ppid=2501 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.770176 kernel: audit: type=1327 audit(1742237747.752:234): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:55:47.770195 kernel: audit: type=1325 audit(1742237747.754:235): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.754000 audit[2585]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.754000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea5b1c530 a2=0 a3=7ffea5b1c51c items=0 ppid=2501 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.775682 kernel: audit: type=1300 audit(1742237747.754:235): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea5b1c530 a2=0 a3=7ffea5b1c51c items=0 ppid=2501 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.754000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:55:47.777503 kernel: audit: type=1327 audit(1742237747.754:235): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:55:47.777533 kernel: audit: type=1325 audit(1742237747.758:236): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.758000 audit[2587]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.758000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca403b0d0 a2=0 a3=7ffca403b0bc items=0 ppid=2501 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:55:47.763000 audit[2589]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.763000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc524725e0 a2=0 a3=7ffc524725cc items=0 ppid=2501 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Mar 17 18:55:47.763000 audit[2588]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.763000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc699e5c0 a2=0 a3=7fffc699e5ac items=0 ppid=2501 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Mar 17 18:55:47.859000 audit[2590]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.859000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcb257ed50 a2=0 a3=7ffcb257ed3c items=0 ppid=2501 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Mar 17 18:55:47.863000 audit[2592]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.863000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcb562c560 a2=0 a3=7ffcb562c54c items=0 ppid=2501 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.863000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Mar 17 18:55:47.867000 audit[2595]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.867000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffe7b76c50 a2=0 a3=7fffe7b76c3c items=0 ppid=2501 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Mar 17 18:55:47.868000 audit[2596]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.868000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffd2730a830 a2=0 a3=7ffd2730a81c items=0 ppid=2501 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Mar 17 18:55:47.870000 audit[2598]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.870000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff14cd5d70 a2=0 a3=7fff14cd5d5c items=0 ppid=2501 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Mar 17 18:55:47.871000 audit[2599]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.871000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaac8b040 a2=0 a3=7ffeaac8b02c items=0 ppid=2501 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.871000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Mar 17 18:55:47.873000 audit[2601]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.873000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc46a3d970 a2=0 a3=7ffc46a3d95c items=0 ppid=2501 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Mar 17 18:55:47.876000 audit[2604]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.876000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffbc0cd7b0 a2=0 a3=7fffbc0cd79c items=0 ppid=2501 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.876000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Mar 17 18:55:47.877000 audit[2605]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.877000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdabc1db80 a2=0 a3=7ffdabc1db6c items=0 ppid=2501 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Mar 17 18:55:47.879000 audit[2607]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.879000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe330055b0 a2=0 a3=7ffe3300559c items=0 ppid=2501 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Mar 17 18:55:47.880000 audit[2608]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.880000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff172fa310 a2=0 a3=7fff172fa2fc items=0 ppid=2501 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Mar 17 18:55:47.882000 audit[2610]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.882000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe134b1860 a2=0 a3=7ffe134b184c items=0 ppid=2501 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:55:47.884000 audit[2613]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.884000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdfb514cc0 a2=0 a3=7ffdfb514cac items=0 ppid=2501 
pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:55:47.887000 audit[2616]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.887000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc03c73b20 a2=0 a3=7ffc03c73b0c items=0 ppid=2501 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Mar 17 18:55:47.888000 audit[2617]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.888000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc41db87d0 a2=0 a3=7ffc41db87bc items=0 ppid=2501 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Mar 17 18:55:47.891000 audit[2619]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.891000 audit[2619]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffde06680f0 a2=0 a3=7ffde06680dc items=0 ppid=2501 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.891000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:55:47.896000 audit[2622]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.896000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd789500c0 a2=0 a3=7ffd789500ac items=0 ppid=2501 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.896000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:55:47.896000 audit[2623]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.896000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5bcecea0 a2=0 a3=7ffd5bcece8c items=0 ppid=2501 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Mar 17 18:55:47.898000 audit[2625]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:55:47.898000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc350d7f60 a2=0 a3=7ffc350d7f4c items=0 ppid=2501 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Mar 17 18:55:47.914000 audit[2631]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:47.914000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fffafbed800 a2=0 a3=7fffafbed7ec items=0 ppid=2501 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:47.919000 audit[2631]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:47.919000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffafbed800 a2=0 a3=7fffafbed7ec items=0 ppid=2501 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:47.920000 audit[2636]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.920000 audit[2636]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd2a2f1b80 a2=0 a3=7ffd2a2f1b6c items=0 ppid=2501 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Mar 17 18:55:47.922000 audit[2638]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.922000 audit[2638]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffa1ebee70 a2=0 a3=7fffa1ebee5c items=0 ppid=2501 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Mar 17 18:55:47.924000 audit[2641]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.924000 audit[2641]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffca1b1b690 a2=0 a3=7ffca1b1b67c items=0 ppid=2501 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Mar 17 18:55:47.925000 audit[2642]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2642 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.925000 audit[2642]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc66376c70 a2=0 a3=7ffc66376c5c items=0 ppid=2501 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Mar 17 18:55:47.927000 audit[2644]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2644 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.927000 audit[2644]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2eb288c0 a2=0 a3=7ffd2eb288ac items=0 ppid=2501 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.927000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Mar 17 18:55:47.928000 audit[2645]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 
18:55:47.928000 audit[2645]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd557fd2d0 a2=0 a3=7ffd557fd2bc items=0 ppid=2501 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Mar 17 18:55:47.929000 audit[2647]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.929000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdd8494390 a2=0 a3=7ffdd849437c items=0 ppid=2501 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Mar 17 18:55:47.931000 audit[2650]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2650 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.931000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff2237a250 a2=0 a3=7fff2237a23c items=0 ppid=2501 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Mar 17 18:55:47.932000 audit[2651]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2651 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.932000 audit[2651]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd77738d30 a2=0 a3=7ffd77738d1c items=0 ppid=2501 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Mar 17 18:55:47.934000 audit[2653]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.934000 audit[2653]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5de43870 a2=0 a3=7fff5de4385c items=0 ppid=2501 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.934000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Mar 17 18:55:47.934000 audit[2654]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.934000 audit[2654]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffce9aff60 a2=0 a3=7fffce9aff4c items=0 ppid=2501 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Mar 17 18:55:47.936000 audit[2656]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2656 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.936000 audit[2656]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffddef50d90 a2=0 a3=7ffddef50d7c items=0 ppid=2501 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:55:47.939000 audit[2659]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.939000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc23d46380 a2=0 a3=7ffc23d4636c items=0 ppid=2501 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Mar 17 18:55:47.941000 audit[2662]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.941000 audit[2662]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff65099000 a2=0 a3=7fff65098fec items=0 ppid=2501 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Mar 17 18:55:47.942000 audit[2663]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Mar 17 18:55:47.942000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe43350b20 a2=0 a3=7ffe43350b0c items=0 ppid=2501 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Mar 17 18:55:47.943000 audit[2665]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.943000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffcc09e8650 a2=0 a3=7ffcc09e863c items=0 ppid=2501 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:55:47.945000 audit[2668]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2668 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.945000 audit[2668]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffe2c4c710 a2=0 a3=7fffe2c4c6fc items=0 ppid=2501 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:55:47.946000 audit[2669]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.946000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc79335f60 a2=0 a3=7ffc79335f4c items=0 ppid=2501 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Mar 17 18:55:47.948000 audit[2671]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.948000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe469c9810 a2=0 a3=7ffe469c97fc items=0 ppid=2501 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.948000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Mar 17 18:55:47.948000 audit[2672]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2672 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.948000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf20cc950 a2=0 a3=7ffdf20cc93c items=0 ppid=2501 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Mar 17 18:55:47.950000 audit[2674]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.950000 audit[2674]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd26efde40 a2=0 a3=7ffd26efde2c items=0 ppid=2501 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:55:47.952000 audit[2677]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:55:47.952000 audit[2677]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdcaa31850 a2=0 a3=7ffdcaa3183c items=0 ppid=2501 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:55:47.954000 audit[2679]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2679 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Mar 17 18:55:47.954000 audit[2679]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffc51c06ae0 a2=0 a3=7ffc51c06acc items=0 ppid=2501 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.954000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:47.954000 audit[2679]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2679 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Mar 17 18:55:47.954000 audit[2679]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc51c06ae0 a2=0 a3=7ffc51c06acc items=0 ppid=2501 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:47.954000 
audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:49.891082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779435668.mount: Deactivated successfully. Mar 17 18:55:50.610831 env[1374]: time="2025-03-17T18:55:50.610804835Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:50.612021 env[1374]: time="2025-03-17T18:55:50.612008258Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:50.613051 env[1374]: time="2025-03-17T18:55:50.613039092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:50.614036 env[1374]: time="2025-03-17T18:55:50.614023370Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:50.616343 env[1374]: time="2025-03-17T18:55:50.616327018Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Mar 17 18:55:50.622220 env[1374]: time="2025-03-17T18:55:50.622203542Z" level=info msg="CreateContainer within sandbox \"b4c45841bdb7d0cc6155dd164f26af115a721231b2516ba02cb66e8f4b9c9ddd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 18:55:50.629993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343231508.mount: Deactivated successfully. Mar 17 18:55:50.639977 env[1374]: time="2025-03-17T18:55:50.639949956Z" level=info msg="CreateContainer within sandbox \"b4c45841bdb7d0cc6155dd164f26af115a721231b2516ba02cb66e8f4b9c9ddd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"30292345d5e02403e0ba24b2f31c0d3f0b0bdb3a5d29ecd0753f2b8437bfdb34\"" Mar 17 18:55:50.640290 env[1374]: time="2025-03-17T18:55:50.640273135Z" level=info msg="StartContainer for \"30292345d5e02403e0ba24b2f31c0d3f0b0bdb3a5d29ecd0753f2b8437bfdb34\"" Mar 17 18:55:50.734687 env[1374]: time="2025-03-17T18:55:50.734653605Z" level=info msg="StartContainer for \"30292345d5e02403e0ba24b2f31c0d3f0b0bdb3a5d29ecd0753f2b8437bfdb34\" returns successfully" Mar 17 18:55:50.851306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755594486.mount: Deactivated successfully. 
Mar 17 18:55:52.119899 kubelet[2365]: I0317 18:55:52.119866 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-wc6zp" podStartSLOduration=2.491658608 podStartE2EDuration="6.119853932s" podCreationTimestamp="2025-03-17 18:55:46 +0000 UTC" firstStartedPulling="2025-03-17 18:55:46.989176834 +0000 UTC m=+15.004721604" lastFinishedPulling="2025-03-17 18:55:50.617372157 +0000 UTC m=+18.632916928" observedRunningTime="2025-03-17 18:55:51.179690673 +0000 UTC m=+19.195235443" watchObservedRunningTime="2025-03-17 18:55:52.119853932 +0000 UTC m=+20.135398705" Mar 17 18:55:53.365134 kernel: kauditd_printk_skb: 143 callbacks suppressed Mar 17 18:55:53.365230 kernel: audit: type=1325 audit(1742237753.361:284): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2719 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:53.361000 audit[2719]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2719 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:53.369251 kernel: audit: type=1300 audit(1742237753.361:284): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe4d174330 a2=0 a3=7ffe4d17431c items=0 ppid=2501 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:53.361000 audit[2719]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe4d174330 a2=0 a3=7ffe4d17431c items=0 ppid=2501 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:53.371164 kernel: audit: type=1327 audit(1742237753.361:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:53.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:53.373242 kernel: audit: type=1325 audit(1742237753.364:285): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2719 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:53.364000 audit[2719]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2719 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:53.364000 audit[2719]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4d174330 a2=0 a3=0 items=0 ppid=2501 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:53.376994 kernel: audit: type=1300 audit(1742237753.364:285): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4d174330 a2=0 a3=0 items=0 ppid=2501 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:53.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:53.379024 kernel: audit: type=1327 audit(1742237753.364:285): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:53.379000 audit[2721]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:53.379000 audit[2721]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff6ce94e70 a2=0 a3=7fff6ce94e5c items=0 ppid=2501 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:53.386839 kernel: audit: type=1325 audit(1742237753.379:286): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:53.386908 kernel: audit: type=1300 audit(1742237753.379:286): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff6ce94e70 a2=0 a3=7fff6ce94e5c items=0 ppid=2501 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:53.386931 kernel: audit: type=1327 audit(1742237753.379:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:53.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:53.382000 audit[2721]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:53.390577 kernel: audit: type=1325 audit(1742237753.382:287): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:53.382000 audit[2721]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6ce94e70 a2=0 a3=0 items=0 ppid=2501 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:53.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:53.497839 kubelet[2365]: I0317 18:55:53.497805 2365 topology_manager.go:215] "Topology Admit Handler" podUID="9a671705-409d-4e4c-9c6d-157cce87a4cc" podNamespace="calico-system" podName="calico-typha-84d4bf9c78-2xjkz" Mar 17 18:55:53.541017 kubelet[2365]: I0317 18:55:53.540991 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9a671705-409d-4e4c-9c6d-157cce87a4cc-typha-certs\") pod \"calico-typha-84d4bf9c78-2xjkz\" (UID: \"9a671705-409d-4e4c-9c6d-157cce87a4cc\") " pod="calico-system/calico-typha-84d4bf9c78-2xjkz" Mar 17 18:55:53.541017 kubelet[2365]: I0317 18:55:53.541022 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a671705-409d-4e4c-9c6d-157cce87a4cc-tigera-ca-bundle\") pod \"calico-typha-84d4bf9c78-2xjkz\" (UID: \"9a671705-409d-4e4c-9c6d-157cce87a4cc\") " pod="calico-system/calico-typha-84d4bf9c78-2xjkz" Mar 17 18:55:53.541163 kubelet[2365]: I0317 
18:55:53.541035 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gwrg\" (UniqueName: \"kubernetes.io/projected/9a671705-409d-4e4c-9c6d-157cce87a4cc-kube-api-access-9gwrg\") pod \"calico-typha-84d4bf9c78-2xjkz\" (UID: \"9a671705-409d-4e4c-9c6d-157cce87a4cc\") " pod="calico-system/calico-typha-84d4bf9c78-2xjkz" Mar 17 18:55:53.555713 kubelet[2365]: I0317 18:55:53.555686 2365 topology_manager.go:215] "Topology Admit Handler" podUID="b79184a0-cf57-4a18-ab8b-2e32e2be0da2" podNamespace="calico-system" podName="calico-node-cqsxd" Mar 17 18:55:53.641932 kubelet[2365]: I0317 18:55:53.641857 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-lib-modules\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.641932 kubelet[2365]: I0317 18:55:53.641882 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-cni-log-dir\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.641932 kubelet[2365]: I0317 18:55:53.641894 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-cni-bin-dir\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.641932 kubelet[2365]: I0317 18:55:53.641919 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-tigera-ca-bundle\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.642084 kubelet[2365]: I0317 18:55:53.641936 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-node-certs\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.642084 kubelet[2365]: I0317 18:55:53.641948 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-flexvol-driver-host\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.642084 kubelet[2365]: I0317 18:55:53.641959 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-xtables-lock\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.642084 kubelet[2365]: I0317 18:55:53.641968 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-var-lib-calico\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.642084 kubelet[2365]: I0317 18:55:53.641977 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-var-run-calico\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.642178 kubelet[2365]: I0317 18:55:53.641987 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-policysync\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.642178 kubelet[2365]: I0317 18:55:53.642003 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-cni-net-dir\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.642178 kubelet[2365]: I0317 18:55:53.642015 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwdkn\" (UniqueName: \"kubernetes.io/projected/b79184a0-cf57-4a18-ab8b-2e32e2be0da2-kube-api-access-dwdkn\") pod \"calico-node-cqsxd\" (UID: \"b79184a0-cf57-4a18-ab8b-2e32e2be0da2\") " pod="calico-system/calico-node-cqsxd" Mar 17 18:55:53.673816 kubelet[2365]: I0317 18:55:53.673788 2365 topology_manager.go:215] "Topology Admit Handler" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" podNamespace="calico-system" podName="csi-node-driver-9xnh4" Mar 17 18:55:53.673973 kubelet[2365]: E0317 18:55:53.673958 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:55:53.742147 kubelet[2365]: I0317 18:55:53.742123 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1165f5ec-1445-4386-b540-a9b8a16322f3-socket-dir\") pod \"csi-node-driver-9xnh4\" (UID: \"1165f5ec-1445-4386-b540-a9b8a16322f3\") " pod="calico-system/csi-node-driver-9xnh4" Mar 17 18:55:53.742253 kubelet[2365]: I0317 18:55:53.742161 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1165f5ec-1445-4386-b540-a9b8a16322f3-varrun\") pod \"csi-node-driver-9xnh4\" (UID: \"1165f5ec-1445-4386-b540-a9b8a16322f3\") " pod="calico-system/csi-node-driver-9xnh4" Mar 17 18:55:53.742253 kubelet[2365]: I0317 18:55:53.742188 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wcmr\" (UniqueName: \"kubernetes.io/projected/1165f5ec-1445-4386-b540-a9b8a16322f3-kube-api-access-2wcmr\") pod \"csi-node-driver-9xnh4\" (UID: \"1165f5ec-1445-4386-b540-a9b8a16322f3\") " pod="calico-system/csi-node-driver-9xnh4" Mar 17 18:55:53.742253 
kubelet[2365]: I0317 18:55:53.742210 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1165f5ec-1445-4386-b540-a9b8a16322f3-kubelet-dir\") pod \"csi-node-driver-9xnh4\" (UID: \"1165f5ec-1445-4386-b540-a9b8a16322f3\") " pod="calico-system/csi-node-driver-9xnh4" Mar 17 18:55:53.742253 kubelet[2365]: I0317 18:55:53.742229 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1165f5ec-1445-4386-b540-a9b8a16322f3-registration-dir\") pod \"csi-node-driver-9xnh4\" (UID: \"1165f5ec-1445-4386-b540-a9b8a16322f3\") " pod="calico-system/csi-node-driver-9xnh4" Mar 17 18:55:53.748085 kubelet[2365]: E0317 18:55:53.748059 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.748085 kubelet[2365]: W0317 18:55:53.748083 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.748198 kubelet[2365]: E0317 18:55:53.748102 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.756017 kubelet[2365]: E0317 18:55:53.756000 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.756017 kubelet[2365]: W0317 18:55:53.756012 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.756123 kubelet[2365]: E0317 18:55:53.756026 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.801505 env[1374]: time="2025-03-17T18:55:53.801468085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84d4bf9c78-2xjkz,Uid:9a671705-409d-4e4c-9c6d-157cce87a4cc,Namespace:calico-system,Attempt:0,}" Mar 17 18:55:53.812033 env[1374]: time="2025-03-17T18:55:53.811943765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:55:53.812132 env[1374]: time="2025-03-17T18:55:53.811982403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:55:53.812132 env[1374]: time="2025-03-17T18:55:53.812022415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:55:53.812272 env[1374]: time="2025-03-17T18:55:53.812234926Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5004721d8f0ef343d60e19204c0f2e7b0e39d863798515b0b8fd0189a75891d pid=2736 runtime=io.containerd.runc.v2 Mar 17 18:55:53.843200 kubelet[2365]: E0317 18:55:53.843178 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.843200 kubelet[2365]: W0317 18:55:53.843193 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.843200 kubelet[2365]: E0317 18:55:53.843208 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.843475 kubelet[2365]: E0317 18:55:53.843465 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.843475 kubelet[2365]: W0317 18:55:53.843472 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.843529 kubelet[2365]: E0317 18:55:53.843478 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.843579 kubelet[2365]: E0317 18:55:53.843570 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.843579 kubelet[2365]: W0317 18:55:53.843576 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.843636 kubelet[2365]: E0317 18:55:53.843585 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.843711 kubelet[2365]: E0317 18:55:53.843689 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.843711 kubelet[2365]: W0317 18:55:53.843704 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.843711 kubelet[2365]: E0317 18:55:53.843710 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:53.843854 kubelet[2365]: E0317 18:55:53.843798 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.843854 kubelet[2365]: W0317 18:55:53.843806 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.843854 kubelet[2365]: E0317 18:55:53.843813 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.843977 kubelet[2365]: E0317 18:55:53.843968 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.844104 kubelet[2365]: W0317 18:55:53.844043 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.844104 kubelet[2365]: E0317 18:55:53.844059 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.844210 kubelet[2365]: E0317 18:55:53.844203 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.844259 kubelet[2365]: W0317 18:55:53.844246 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.844314 kubelet[2365]: E0317 18:55:53.844305 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.844456 kubelet[2365]: E0317 18:55:53.844448 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.844519 kubelet[2365]: W0317 18:55:53.844511 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.844570 kubelet[2365]: E0317 18:55:53.844562 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.846705 kubelet[2365]: E0317 18:55:53.846697 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.846764 kubelet[2365]: W0317 18:55:53.846756 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.846820 kubelet[2365]: E0317 18:55:53.846812 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:53.847736 kubelet[2365]: E0317 18:55:53.847729 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.847803 kubelet[2365]: W0317 18:55:53.847794 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.847880 kubelet[2365]: E0317 18:55:53.847870 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.848021 kubelet[2365]: E0317 18:55:53.848015 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.848098 kubelet[2365]: W0317 18:55:53.848085 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.848184 kubelet[2365]: E0317 18:55:53.848177 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.849941 kubelet[2365]: E0317 18:55:53.849933 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.850001 kubelet[2365]: W0317 18:55:53.849992 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.850754 kubelet[2365]: E0317 18:55:53.850747 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.850813 kubelet[2365]: W0317 18:55:53.850804 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.851569 kubelet[2365]: E0317 18:55:53.851561 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:53.851657 kubelet[2365]: E0317 18:55:53.851633 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.851719 kubelet[2365]: W0317 18:55:53.851706 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.851844 kubelet[2365]: E0317 18:55:53.851838 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.851888 kubelet[2365]: W0317 18:55:53.851880 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.851999 kubelet[2365]: E0317 18:55:53.851993 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.852044 kubelet[2365]: W0317 18:55:53.852036 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.852091 kubelet[2365]: E0317 18:55:53.852083 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.852209 kubelet[2365]: E0317 18:55:53.852203 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.852254 kubelet[2365]: W0317 18:55:53.852246 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.852718 kubelet[2365]: E0317 18:55:53.852403 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.852718 kubelet[2365]: E0317 18:55:53.852351 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.852718 kubelet[2365]: E0317 18:55:53.852358 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.852718 kubelet[2365]: E0317 18:55:53.852514 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.852718 kubelet[2365]: W0317 18:55:53.852519 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.852718 kubelet[2365]: E0317 18:55:53.852524 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:53.852892 kubelet[2365]: E0317 18:55:53.851637 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.854596 kubelet[2365]: E0317 18:55:53.854589 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.854653 kubelet[2365]: W0317 18:55:53.854644 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.855571 kubelet[2365]: E0317 18:55:53.855563 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.855781 kubelet[2365]: E0317 18:55:53.855771 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.855781 kubelet[2365]: W0317 18:55:53.855779 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.855848 kubelet[2365]: E0317 18:55:53.855788 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.855944 kubelet[2365]: E0317 18:55:53.855933 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.855944 kubelet[2365]: W0317 18:55:53.855940 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.856009 kubelet[2365]: E0317 18:55:53.855948 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.856102 kubelet[2365]: E0317 18:55:53.856093 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.856102 kubelet[2365]: W0317 18:55:53.856099 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.856189 kubelet[2365]: E0317 18:55:53.856178 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:53.856254 kubelet[2365]: E0317 18:55:53.856244 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.856254 kubelet[2365]: W0317 18:55:53.856250 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.856314 kubelet[2365]: E0317 18:55:53.856258 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.856336 kubelet[2365]: E0317 18:55:53.856329 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.856336 kubelet[2365]: W0317 18:55:53.856333 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.856384 kubelet[2365]: E0317 18:55:53.856338 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.856445 kubelet[2365]: E0317 18:55:53.856435 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.856445 kubelet[2365]: W0317 18:55:53.856442 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.856502 kubelet[2365]: E0317 18:55:53.856447 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:53.856531 kubelet[2365]: E0317 18:55:53.856525 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:53.856531 kubelet[2365]: W0317 18:55:53.856529 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:53.856576 kubelet[2365]: E0317 18:55:53.856534 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:53.857939 env[1374]: time="2025-03-17T18:55:53.857917209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cqsxd,Uid:b79184a0-cf57-4a18-ab8b-2e32e2be0da2,Namespace:calico-system,Attempt:0,}" Mar 17 18:55:53.862095 env[1374]: time="2025-03-17T18:55:53.862079983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84d4bf9c78-2xjkz,Uid:9a671705-409d-4e4c-9c6d-157cce87a4cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5004721d8f0ef343d60e19204c0f2e7b0e39d863798515b0b8fd0189a75891d\"" Mar 17 18:55:53.863649 env[1374]: time="2025-03-17T18:55:53.863629249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Mar 17 18:55:53.866426 env[1374]: time="2025-03-17T18:55:53.866384092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:55:53.866478 env[1374]: time="2025-03-17T18:55:53.866440546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:55:53.866478 env[1374]: time="2025-03-17T18:55:53.866457306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:55:53.866715 env[1374]: time="2025-03-17T18:55:53.866691237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e8734cc11ea517438c33d9a19f3acef4e76a8aeeaff79a59991c23df1c9b3bb pid=2805 runtime=io.containerd.runc.v2 Mar 17 18:55:53.889565 env[1374]: time="2025-03-17T18:55:53.889536484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cqsxd,Uid:b79184a0-cf57-4a18-ab8b-2e32e2be0da2,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e8734cc11ea517438c33d9a19f3acef4e76a8aeeaff79a59991c23df1c9b3bb\"" Mar 17 18:55:54.393000 audit[2842]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2842 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:54.393000 audit[2842]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fffd69a2040 a2=0 a3=7fffd69a202c items=0 ppid=2501 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:54.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:54.397000 audit[2842]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2842 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:55:54.397000 audit[2842]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd69a2040 a2=0 a3=0 items=0 ppid=2501 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:55:54.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:55:55.112255 kubelet[2365]: E0317 18:55:55.112232 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:55:55.514386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096809113.mount: Deactivated successfully. Mar 17 18:55:56.528894 env[1374]: time="2025-03-17T18:55:56.528866046Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:56.530016 env[1374]: time="2025-03-17T18:55:56.530004214Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:56.531061 env[1374]: time="2025-03-17T18:55:56.531049683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:56.532124 env[1374]: time="2025-03-17T18:55:56.532112001Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:56.532533 env[1374]: time="2025-03-17T18:55:56.532515615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Mar 17 18:55:56.533527 env[1374]: time="2025-03-17T18:55:56.533515675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Mar 17 18:55:56.542250 env[1374]: time="2025-03-17T18:55:56.542212517Z" level=info msg="CreateContainer within sandbox \"d5004721d8f0ef343d60e19204c0f2e7b0e39d863798515b0b8fd0189a75891d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 18:55:56.548352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407933273.mount: Deactivated successfully. 
Mar 17 18:55:56.550179 env[1374]: time="2025-03-17T18:55:56.550157146Z" level=info msg="CreateContainer within sandbox \"d5004721d8f0ef343d60e19204c0f2e7b0e39d863798515b0b8fd0189a75891d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3da3a2700b16dbabee7dbbe595bff177a75057daf99e5768c281475e9ca0695c\"" Mar 17 18:55:56.551177 env[1374]: time="2025-03-17T18:55:56.550531048Z" level=info msg="StartContainer for \"3da3a2700b16dbabee7dbbe595bff177a75057daf99e5768c281475e9ca0695c\"" Mar 17 18:55:56.665179 env[1374]: time="2025-03-17T18:55:56.665148982Z" level=info msg="StartContainer for \"3da3a2700b16dbabee7dbbe595bff177a75057daf99e5768c281475e9ca0695c\" returns successfully" Mar 17 18:55:57.111894 kubelet[2365]: E0317 18:55:57.111687 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:55:57.214967 kubelet[2365]: I0317 18:55:57.214889 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84d4bf9c78-2xjkz" podStartSLOduration=1.544495778 podStartE2EDuration="4.21487848s" podCreationTimestamp="2025-03-17 18:55:53 +0000 UTC" firstStartedPulling="2025-03-17 18:55:53.862771726 +0000 UTC m=+21.878316493" lastFinishedPulling="2025-03-17 18:55:56.533154424 +0000 UTC m=+24.548699195" observedRunningTime="2025-03-17 18:55:57.214310361 +0000 UTC m=+25.229855140" watchObservedRunningTime="2025-03-17 18:55:57.21487848 +0000 UTC m=+25.230423255" Mar 17 18:55:57.256586 kubelet[2365]: E0317 18:55:57.256563 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.256586 kubelet[2365]: W0317 18:55:57.256579 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.256715 kubelet[2365]: E0317 18:55:57.256601 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.256715 kubelet[2365]: E0317 18:55:57.256708 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.256715 kubelet[2365]: W0317 18:55:57.256712 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.256778 kubelet[2365]: E0317 18:55:57.256718 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:57.256801 kubelet[2365]: E0317 18:55:57.256798 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.256824 kubelet[2365]: W0317 18:55:57.256802 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.256824 kubelet[2365]: E0317 18:55:57.256807 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.256904 kubelet[2365]: E0317 18:55:57.256892 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.256904 kubelet[2365]: W0317 18:55:57.256900 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.256958 kubelet[2365]: E0317 18:55:57.256905 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257002 kubelet[2365]: E0317 18:55:57.256989 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257002 kubelet[2365]: W0317 18:55:57.256995 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257002 kubelet[2365]: E0317 18:55:57.257000 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257088 kubelet[2365]: E0317 18:55:57.257078 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257088 kubelet[2365]: W0317 18:55:57.257085 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257131 kubelet[2365]: E0317 18:55:57.257090 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257176 kubelet[2365]: E0317 18:55:57.257167 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257176 kubelet[2365]: W0317 18:55:57.257174 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257220 kubelet[2365]: E0317 18:55:57.257179 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:57.257267 kubelet[2365]: E0317 18:55:57.257257 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257267 kubelet[2365]: W0317 18:55:57.257264 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257319 kubelet[2365]: E0317 18:55:57.257270 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257364 kubelet[2365]: E0317 18:55:57.257354 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257364 kubelet[2365]: W0317 18:55:57.257363 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257409 kubelet[2365]: E0317 18:55:57.257369 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257456 kubelet[2365]: E0317 18:55:57.257445 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257456 kubelet[2365]: W0317 18:55:57.257453 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257499 kubelet[2365]: E0317 18:55:57.257458 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257547 kubelet[2365]: E0317 18:55:57.257538 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257547 kubelet[2365]: W0317 18:55:57.257545 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257592 kubelet[2365]: E0317 18:55:57.257550 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257635 kubelet[2365]: E0317 18:55:57.257626 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257635 kubelet[2365]: W0317 18:55:57.257633 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257695 kubelet[2365]: E0317 18:55:57.257637 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:57.257736 kubelet[2365]: E0317 18:55:57.257725 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257736 kubelet[2365]: W0317 18:55:57.257733 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257784 kubelet[2365]: E0317 18:55:57.257738 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257825 kubelet[2365]: E0317 18:55:57.257814 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257825 kubelet[2365]: W0317 18:55:57.257821 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257871 kubelet[2365]: E0317 18:55:57.257826 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.257911 kubelet[2365]: E0317 18:55:57.257901 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.257911 kubelet[2365]: W0317 18:55:57.257910 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.257959 kubelet[2365]: E0317 18:55:57.257916 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.265248 kubelet[2365]: E0317 18:55:57.265232 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.265248 kubelet[2365]: W0317 18:55:57.265242 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.265322 kubelet[2365]: E0317 18:55:57.265252 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.265374 kubelet[2365]: E0317 18:55:57.265361 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.265374 kubelet[2365]: W0317 18:55:57.265371 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.265416 kubelet[2365]: E0317 18:55:57.265376 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:57.265488 kubelet[2365]: E0317 18:55:57.265476 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.265525 kubelet[2365]: W0317 18:55:57.265490 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.265525 kubelet[2365]: E0317 18:55:57.265498 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.265607 kubelet[2365]: E0317 18:55:57.265595 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.265607 kubelet[2365]: W0317 18:55:57.265603 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.265657 kubelet[2365]: E0317 18:55:57.265610 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.265723 kubelet[2365]: E0317 18:55:57.265712 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.265723 kubelet[2365]: W0317 18:55:57.265719 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.265781 kubelet[2365]: E0317 18:55:57.265730 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.265816 kubelet[2365]: E0317 18:55:57.265806 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.265816 kubelet[2365]: W0317 18:55:57.265812 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.265865 kubelet[2365]: E0317 18:55:57.265817 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.265924 kubelet[2365]: E0317 18:55:57.265911 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.265924 kubelet[2365]: W0317 18:55:57.265919 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.265984 kubelet[2365]: E0317 18:55:57.265926 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:57.266113 kubelet[2365]: E0317 18:55:57.266102 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266147 kubelet[2365]: W0317 18:55:57.266111 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.266147 kubelet[2365]: E0317 18:55:57.266124 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.266226 kubelet[2365]: E0317 18:55:57.266215 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266226 kubelet[2365]: W0317 18:55:57.266224 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.266295 kubelet[2365]: E0317 18:55:57.266230 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.266326 kubelet[2365]: E0317 18:55:57.266312 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266326 kubelet[2365]: W0317 18:55:57.266316 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.266388 kubelet[2365]: E0317 18:55:57.266321 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.266414 kubelet[2365]: E0317 18:55:57.266408 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266414 kubelet[2365]: W0317 18:55:57.266413 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.266453 kubelet[2365]: E0317 18:55:57.266419 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.266516 kubelet[2365]: E0317 18:55:57.266504 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266545 kubelet[2365]: W0317 18:55:57.266512 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.266567 kubelet[2365]: E0317 18:55:57.266549 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:57.266720 kubelet[2365]: E0317 18:55:57.266708 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266720 kubelet[2365]: W0317 18:55:57.266716 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.266771 kubelet[2365]: E0317 18:55:57.266724 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.266814 kubelet[2365]: E0317 18:55:57.266803 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266840 kubelet[2365]: W0317 18:55:57.266816 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.266840 kubelet[2365]: E0317 18:55:57.266821 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.266901 kubelet[2365]: E0317 18:55:57.266891 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266901 kubelet[2365]: W0317 18:55:57.266898 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.266942 kubelet[2365]: E0317 18:55:57.266905 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.266998 kubelet[2365]: E0317 18:55:57.266988 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.266998 kubelet[2365]: W0317 18:55:57.266998 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.267071 kubelet[2365]: E0317 18:55:57.267008 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:57.267253 kubelet[2365]: E0317 18:55:57.267241 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.267253 kubelet[2365]: W0317 18:55:57.267249 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.267305 kubelet[2365]: E0317 18:55:57.267256 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:57.267355 kubelet[2365]: E0317 18:55:57.267339 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:57.267355 kubelet[2365]: W0317 18:55:57.267345 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:57.267355 kubelet[2365]: E0317 18:55:57.267350 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.183941 kubelet[2365]: I0317 18:55:58.183601 2365 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:55:58.239476 env[1374]: time="2025-03-17T18:55:58.239441925Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:58.240165 env[1374]: time="2025-03-17T18:55:58.240148278Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:58.240868 env[1374]: time="2025-03-17T18:55:58.240853594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:58.242440 env[1374]: time="2025-03-17T18:55:58.242427034Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:55:58.242714 env[1374]: time="2025-03-17T18:55:58.242699124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Mar 17 18:55:58.245126 env[1374]: time="2025-03-17T18:55:58.245028006Z" level=info msg="CreateContainer within sandbox \"7e8734cc11ea517438c33d9a19f3acef4e76a8aeeaff79a59991c23df1c9b3bb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 18:55:58.251073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493828438.mount: Deactivated successfully. Mar 17 18:55:58.265265 kubelet[2365]: E0317 18:55:58.265245 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.265265 kubelet[2365]: W0317 18:55:58.265257 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.266126 kubelet[2365]: E0317 18:55:58.265267 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:58.266126 kubelet[2365]: E0317 18:55:58.265357 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.266126 kubelet[2365]: W0317 18:55:58.265362 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.266126 kubelet[2365]: E0317 18:55:58.265367 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.266126 kubelet[2365]: E0317 18:55:58.265448 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.266126 kubelet[2365]: W0317 18:55:58.265452 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.266126 kubelet[2365]: E0317 18:55:58.265457 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.266126 kubelet[2365]: E0317 18:55:58.265549 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.266126 kubelet[2365]: W0317 18:55:58.265553 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.266126 kubelet[2365]: E0317 18:55:58.265559 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.267487 kubelet[2365]: E0317 18:55:58.265659 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.267487 kubelet[2365]: W0317 18:55:58.265673 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.267487 kubelet[2365]: E0317 18:55:58.265678 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.267487 kubelet[2365]: E0317 18:55:58.265760 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.267487 kubelet[2365]: W0317 18:55:58.265764 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.267487 kubelet[2365]: E0317 18:55:58.265769 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:58.267487 kubelet[2365]: E0317 18:55:58.265865 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.267487 kubelet[2365]: W0317 18:55:58.265870 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.267487 kubelet[2365]: E0317 18:55:58.265874 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.267487 kubelet[2365]: E0317 18:55:58.265958 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.268195 env[1374]: time="2025-03-17T18:55:58.266443363Z" level=info msg="CreateContainer within sandbox \"7e8734cc11ea517438c33d9a19f3acef4e76a8aeeaff79a59991c23df1c9b3bb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c08646f25fe9c13aefa43aac7b0f8bf9b98fcf5a190b0637ef6910b2607a5f76\"" Mar 17 18:55:58.268195 env[1374]: time="2025-03-17T18:55:58.267089037Z" level=info msg="StartContainer for \"c08646f25fe9c13aefa43aac7b0f8bf9b98fcf5a190b0637ef6910b2607a5f76\"" Mar 17 18:55:58.268751 kubelet[2365]: W0317 18:55:58.265962 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.268751 kubelet[2365]: E0317 18:55:58.265967 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.268751 kubelet[2365]: E0317 18:55:58.266109 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.268751 kubelet[2365]: W0317 18:55:58.266124 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.268751 kubelet[2365]: E0317 18:55:58.266129 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.268751 kubelet[2365]: E0317 18:55:58.266219 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.268751 kubelet[2365]: W0317 18:55:58.266224 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.268751 kubelet[2365]: E0317 18:55:58.266229 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:58.268751 kubelet[2365]: E0317 18:55:58.266318 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.268751 kubelet[2365]: W0317 18:55:58.266322 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.268954 kubelet[2365]: E0317 18:55:58.266327 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.268954 kubelet[2365]: E0317 18:55:58.266441 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.268954 kubelet[2365]: W0317 18:55:58.266445 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.268954 kubelet[2365]: E0317 18:55:58.266450 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.268954 kubelet[2365]: E0317 18:55:58.266799 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.268954 kubelet[2365]: W0317 18:55:58.266822 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.268954 kubelet[2365]: E0317 18:55:58.266829 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.268954 kubelet[2365]: E0317 18:55:58.266904 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.268954 kubelet[2365]: W0317 18:55:58.266909 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.268954 kubelet[2365]: E0317 18:55:58.266914 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.269130 kubelet[2365]: E0317 18:55:58.266983 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.269130 kubelet[2365]: W0317 18:55:58.266989 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.269130 kubelet[2365]: E0317 18:55:58.266993 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:58.273152 kubelet[2365]: E0317 18:55:58.273139 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.273152 kubelet[2365]: W0317 18:55:58.273150 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.273234 kubelet[2365]: E0317 18:55:58.273161 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.273259 kubelet[2365]: E0317 18:55:58.273255 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.273281 kubelet[2365]: W0317 18:55:58.273259 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.273281 kubelet[2365]: E0317 18:55:58.273265 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.273392 kubelet[2365]: E0317 18:55:58.273340 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.273392 kubelet[2365]: W0317 18:55:58.273346 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.273392 kubelet[2365]: E0317 18:55:58.273351 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.274172 kubelet[2365]: E0317 18:55:58.274130 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.274172 kubelet[2365]: W0317 18:55:58.274142 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.274172 kubelet[2365]: E0317 18:55:58.274154 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.274258 kubelet[2365]: E0317 18:55:58.274240 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.274258 kubelet[2365]: W0317 18:55:58.274246 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.274258 kubelet[2365]: E0317 18:55:58.274254 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:58.274339 kubelet[2365]: E0317 18:55:58.274330 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.274339 kubelet[2365]: W0317 18:55:58.274336 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.274394 kubelet[2365]: E0317 18:55:58.274341 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.274419 kubelet[2365]: E0317 18:55:58.274412 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.274419 kubelet[2365]: W0317 18:55:58.274417 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.274459 kubelet[2365]: E0317 18:55:58.274421 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.274514 kubelet[2365]: E0317 18:55:58.274506 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.274514 kubelet[2365]: W0317 18:55:58.274512 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.274573 kubelet[2365]: E0317 18:55:58.274546 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.274723 kubelet[2365]: E0317 18:55:58.274714 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.274723 kubelet[2365]: W0317 18:55:58.274721 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.274832 kubelet[2365]: E0317 18:55:58.274782 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.274832 kubelet[2365]: E0317 18:55:58.274798 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.274832 kubelet[2365]: W0317 18:55:58.274802 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.274832 kubelet[2365]: E0317 18:55:58.274809 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:58.274912 kubelet[2365]: E0317 18:55:58.274883 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.274912 kubelet[2365]: W0317 18:55:58.274888 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.274912 kubelet[2365]: E0317 18:55:58.274892 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.275152 kubelet[2365]: E0317 18:55:58.274978 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.275152 kubelet[2365]: W0317 18:55:58.274982 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.275152 kubelet[2365]: E0317 18:55:58.274988 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.275236 kubelet[2365]: E0317 18:55:58.275228 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.275236 kubelet[2365]: W0317 18:55:58.275234 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.275295 kubelet[2365]: E0317 18:55:58.275245 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.275333 kubelet[2365]: E0317 18:55:58.275326 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.275333 kubelet[2365]: W0317 18:55:58.275332 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.275385 kubelet[2365]: E0317 18:55:58.275339 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.275429 kubelet[2365]: E0317 18:55:58.275422 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.275429 kubelet[2365]: W0317 18:55:58.275427 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.275489 kubelet[2365]: E0317 18:55:58.275432 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:55:58.275511 kubelet[2365]: E0317 18:55:58.275500 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.275511 kubelet[2365]: W0317 18:55:58.275504 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.275511 kubelet[2365]: E0317 18:55:58.275508 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.275596 kubelet[2365]: E0317 18:55:58.275588 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.275596 kubelet[2365]: W0317 18:55:58.275593 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.275646 kubelet[2365]: E0317 18:55:58.275598 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.275746 kubelet[2365]: E0317 18:55:58.275737 2365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:55:58.275746 kubelet[2365]: W0317 18:55:58.275742 2365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:55:58.275746 kubelet[2365]: E0317 18:55:58.275747 2365 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:55:58.318128 env[1374]: time="2025-03-17T18:55:58.318100780Z" level=info msg="StartContainer for \"c08646f25fe9c13aefa43aac7b0f8bf9b98fcf5a190b0637ef6910b2607a5f76\" returns successfully" Mar 17 18:55:58.371177 env[1374]: time="2025-03-17T18:55:58.371147847Z" level=info msg="shim disconnected" id=c08646f25fe9c13aefa43aac7b0f8bf9b98fcf5a190b0637ef6910b2607a5f76 Mar 17 18:55:58.371177 env[1374]: time="2025-03-17T18:55:58.371176254Z" level=warning msg="cleaning up after shim disconnected" id=c08646f25fe9c13aefa43aac7b0f8bf9b98fcf5a190b0637ef6910b2607a5f76 namespace=k8s.io Mar 17 18:55:58.371314 env[1374]: time="2025-03-17T18:55:58.371185641Z" level=info msg="cleaning up dead shim" Mar 17 18:55:58.375771 env[1374]: time="2025-03-17T18:55:58.375747290Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:55:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2998 runtime=io.containerd.runc.v2\n" Mar 17 18:55:58.536857 systemd[1]: run-containerd-runc-k8s.io-c08646f25fe9c13aefa43aac7b0f8bf9b98fcf5a190b0637ef6910b2607a5f76-runc.IP68v5.mount: Deactivated successfully. Mar 17 18:55:58.536946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c08646f25fe9c13aefa43aac7b0f8bf9b98fcf5a190b0637ef6910b2607a5f76-rootfs.mount: Deactivated successfully. 
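Note on the repeated driver-call.go / plugins.go errors above: on startup the kubelet probes its FlexVolume plugin directory (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/), invoking each driver with the init command and expecting a JSON status object on stdout. The nodeagent~uds/uds executable does not exist on this node, so the call fails, the captured output is empty, and decoding that empty output is what yields "unexpected end of JSON input". A minimal Go sketch of that decode step follows; DriverStatus and callDriver are illustrative stand-ins, not the kubelet's actual types.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus is a simplified stand-in for the FlexVolume status object
    // a driver is expected to print; field names here are illustrative only.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func callDriver(path string, args ...string) (*DriverStatus, error) {
        out, err := exec.Command(path, args...).CombinedOutput()
        if err != nil {
            // Missing driver binary: the call fails and out stays empty.
            return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
        }
        var st DriverStatus
        // json.Unmarshal over empty bytes returns "unexpected end of JSON input",
        // the same error string repeated throughout the log above.
        if err := json.Unmarshal(out, &st); err != nil {
            return nil, fmt.Errorf("failed to unmarshal output for command %v: %w", args, err)
        }
        return &st, nil
    }

    func main() {
        st, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        fmt.Println(st, err)
    }

Running json.Unmarshal over an empty byte slice reproduces that exact error string, which is why every probe of the missing driver logs the same pair of messages.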
Mar 17 18:55:59.112476 kubelet[2365]: E0317 18:55:59.112441 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:55:59.185416 env[1374]: time="2025-03-17T18:55:59.185396681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Mar 17 18:56:01.111528 kubelet[2365]: E0317 18:56:01.111499 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:56:03.111776 kubelet[2365]: E0317 18:56:03.111747 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:56:03.971054 env[1374]: time="2025-03-17T18:56:03.971017761Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:03.972221 env[1374]: time="2025-03-17T18:56:03.972201809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:03.973232 env[1374]: time="2025-03-17T18:56:03.973216442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:03.974140 env[1374]: time="2025-03-17T18:56:03.974124613Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:03.974417 env[1374]: time="2025-03-17T18:56:03.974397459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Mar 17 18:56:03.977956 env[1374]: time="2025-03-17T18:56:03.977835709Z" level=info msg="CreateContainer within sandbox \"7e8734cc11ea517438c33d9a19f3acef4e76a8aeeaff79a59991c23df1c9b3bb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 18:56:03.984465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380269659.mount: Deactivated successfully. 
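Note on the "cni plugin not initialized" / NetworkPluginNotReady errors in this stretch: the container runtime keeps the node's network marked not-ready until a CNI configuration file appears in /etc/cni/net.d, and the install-cni container being pulled and created here (ghcr.io/flatcar/calico/cni:v3.29.1) is what eventually writes that file. The following is a rough, illustrative Go check of the same condition, assuming the default /etc/cni/net.d path; it is not containerd's actual implementation.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniReady is an illustrative check mirroring the condition behind
    // "no network config found in /etc/cni/net.d": the runtime reports the
    // network as not ready until at least one CNI config file exists.
    func cniReady(confDir string) (bool, error) {
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(confDir, pat))
            if err != nil {
                return false, err
            }
            if len(matches) > 0 {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniReady("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("CNI config present:", ok)
    }

Until such a config file exists, pods that need pod networking (like csi-node-driver-9xnh4 above) keep cycling through the "Error syncing pod, skipping" messages.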
Mar 17 18:56:04.007263 env[1374]: time="2025-03-17T18:56:04.007171038Z" level=info msg="CreateContainer within sandbox \"7e8734cc11ea517438c33d9a19f3acef4e76a8aeeaff79a59991c23df1c9b3bb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2c87dfefe6ffbcc3b42caae25c80660522e72a5317537ee24e6b031f70e6f00f\"" Mar 17 18:56:04.008370 env[1374]: time="2025-03-17T18:56:04.008256331Z" level=info msg="StartContainer for \"2c87dfefe6ffbcc3b42caae25c80660522e72a5317537ee24e6b031f70e6f00f\"" Mar 17 18:56:04.081405 env[1374]: time="2025-03-17T18:56:04.081373851Z" level=info msg="StartContainer for \"2c87dfefe6ffbcc3b42caae25c80660522e72a5317537ee24e6b031f70e6f00f\" returns successfully" Mar 17 18:56:05.112442 kubelet[2365]: E0317 18:56:05.112407 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:56:05.212128 env[1374]: time="2025-03-17T18:56:05.212020673Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:56:05.232944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c87dfefe6ffbcc3b42caae25c80660522e72a5317537ee24e6b031f70e6f00f-rootfs.mount: Deactivated successfully. Mar 17 18:56:05.236172 env[1374]: time="2025-03-17T18:56:05.236143143Z" level=info msg="shim disconnected" id=2c87dfefe6ffbcc3b42caae25c80660522e72a5317537ee24e6b031f70e6f00f Mar 17 18:56:05.236282 env[1374]: time="2025-03-17T18:56:05.236270580Z" level=warning msg="cleaning up after shim disconnected" id=2c87dfefe6ffbcc3b42caae25c80660522e72a5317537ee24e6b031f70e6f00f namespace=k8s.io Mar 17 18:56:05.236328 env[1374]: time="2025-03-17T18:56:05.236318512Z" level=info msg="cleaning up dead shim" Mar 17 18:56:05.241466 env[1374]: time="2025-03-17T18:56:05.241448202Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:56:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3067 runtime=io.containerd.runc.v2\n" Mar 17 18:56:05.286868 kubelet[2365]: I0317 18:56:05.286802 2365 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:56:05.310216 kubelet[2365]: I0317 18:56:05.308701 2365 topology_manager.go:215] "Topology Admit Handler" podUID="1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e" podNamespace="calico-system" podName="calico-kube-controllers-5b484f5775-2np4h" Mar 17 18:56:05.316523 kubelet[2365]: I0317 18:56:05.316500 2365 topology_manager.go:215] "Topology Admit Handler" podUID="14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pnp4k" Mar 17 18:56:05.316603 kubelet[2365]: I0317 18:56:05.316582 2365 topology_manager.go:215] "Topology Admit Handler" podUID="885d0d3f-49ac-45f6-83b5-96856df9e4b0" podNamespace="calico-apiserver" podName="calico-apiserver-587b6b84cf-nvpll" Mar 17 18:56:05.316825 kubelet[2365]: I0317 18:56:05.316645 2365 topology_manager.go:215] "Topology Admit Handler" podUID="82369388-1126-47fe-9068-8215d11684f0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zzgcp" Mar 17 18:56:05.316825 kubelet[2365]: I0317 18:56:05.316714 2365 topology_manager.go:215] "Topology Admit Handler" 
podUID="5a66bb45-5aee-4882-aac7-d77a88647401" podNamespace="calico-apiserver" podName="calico-apiserver-587b6b84cf-tshpw" Mar 17 18:56:05.342566 kubelet[2365]: I0317 18:56:05.342536 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e-tigera-ca-bundle\") pod \"calico-kube-controllers-5b484f5775-2np4h\" (UID: \"1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e\") " pod="calico-system/calico-kube-controllers-5b484f5775-2np4h" Mar 17 18:56:05.342676 kubelet[2365]: I0317 18:56:05.342573 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gctrg\" (UniqueName: \"kubernetes.io/projected/82369388-1126-47fe-9068-8215d11684f0-kube-api-access-gctrg\") pod \"coredns-7db6d8ff4d-zzgcp\" (UID: \"82369388-1126-47fe-9068-8215d11684f0\") " pod="kube-system/coredns-7db6d8ff4d-zzgcp" Mar 17 18:56:05.342676 kubelet[2365]: I0317 18:56:05.342602 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a66bb45-5aee-4882-aac7-d77a88647401-calico-apiserver-certs\") pod \"calico-apiserver-587b6b84cf-tshpw\" (UID: \"5a66bb45-5aee-4882-aac7-d77a88647401\") " pod="calico-apiserver/calico-apiserver-587b6b84cf-tshpw" Mar 17 18:56:05.342676 kubelet[2365]: I0317 18:56:05.342621 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04-config-volume\") pod \"coredns-7db6d8ff4d-pnp4k\" (UID: \"14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04\") " pod="kube-system/coredns-7db6d8ff4d-pnp4k" Mar 17 18:56:05.342676 kubelet[2365]: I0317 18:56:05.342641 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgxb8\" (UniqueName: \"kubernetes.io/projected/885d0d3f-49ac-45f6-83b5-96856df9e4b0-kube-api-access-vgxb8\") pod \"calico-apiserver-587b6b84cf-nvpll\" (UID: \"885d0d3f-49ac-45f6-83b5-96856df9e4b0\") " pod="calico-apiserver/calico-apiserver-587b6b84cf-nvpll" Mar 17 18:56:05.344871 kubelet[2365]: I0317 18:56:05.342659 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwxld\" (UniqueName: \"kubernetes.io/projected/1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e-kube-api-access-hwxld\") pod \"calico-kube-controllers-5b484f5775-2np4h\" (UID: \"1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e\") " pod="calico-system/calico-kube-controllers-5b484f5775-2np4h" Mar 17 18:56:05.344871 kubelet[2365]: I0317 18:56:05.342761 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbjc\" (UniqueName: \"kubernetes.io/projected/14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04-kube-api-access-pvbjc\") pod \"coredns-7db6d8ff4d-pnp4k\" (UID: \"14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04\") " pod="kube-system/coredns-7db6d8ff4d-pnp4k" Mar 17 18:56:05.344871 kubelet[2365]: I0317 18:56:05.342780 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/885d0d3f-49ac-45f6-83b5-96856df9e4b0-calico-apiserver-certs\") pod \"calico-apiserver-587b6b84cf-nvpll\" (UID: \"885d0d3f-49ac-45f6-83b5-96856df9e4b0\") " pod="calico-apiserver/calico-apiserver-587b6b84cf-nvpll" Mar 17 
18:56:05.344871 kubelet[2365]: I0317 18:56:05.342805 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqgsh\" (UniqueName: \"kubernetes.io/projected/5a66bb45-5aee-4882-aac7-d77a88647401-kube-api-access-dqgsh\") pod \"calico-apiserver-587b6b84cf-tshpw\" (UID: \"5a66bb45-5aee-4882-aac7-d77a88647401\") " pod="calico-apiserver/calico-apiserver-587b6b84cf-tshpw" Mar 17 18:56:05.344871 kubelet[2365]: I0317 18:56:05.342823 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82369388-1126-47fe-9068-8215d11684f0-config-volume\") pod \"coredns-7db6d8ff4d-zzgcp\" (UID: \"82369388-1126-47fe-9068-8215d11684f0\") " pod="kube-system/coredns-7db6d8ff4d-zzgcp" Mar 17 18:56:05.632750 env[1374]: time="2025-03-17T18:56:05.632714738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b484f5775-2np4h,Uid:1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e,Namespace:calico-system,Attempt:0,}" Mar 17 18:56:05.635557 env[1374]: time="2025-03-17T18:56:05.635533158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587b6b84cf-nvpll,Uid:885d0d3f-49ac-45f6-83b5-96856df9e4b0,Namespace:calico-apiserver,Attempt:0,}" Mar 17 18:56:05.636173 env[1374]: time="2025-03-17T18:56:05.635893800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587b6b84cf-tshpw,Uid:5a66bb45-5aee-4882-aac7-d77a88647401,Namespace:calico-apiserver,Attempt:0,}" Mar 17 18:56:05.640786 env[1374]: time="2025-03-17T18:56:05.640762830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnp4k,Uid:14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04,Namespace:kube-system,Attempt:0,}" Mar 17 18:56:05.640874 env[1374]: time="2025-03-17T18:56:05.640772668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzgcp,Uid:82369388-1126-47fe-9068-8215d11684f0,Namespace:kube-system,Attempt:0,}" Mar 17 18:56:05.902180 env[1374]: time="2025-03-17T18:56:05.901983559Z" level=error msg="Failed to destroy network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.902329 env[1374]: time="2025-03-17T18:56:05.902289359Z" level=error msg="encountered an error cleaning up failed sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.902373 env[1374]: time="2025-03-17T18:56:05.902342084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b484f5775-2np4h,Uid:1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.902859 env[1374]: time="2025-03-17T18:56:05.902839888Z" level=error msg="Failed to destroy network for sandbox 
\"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.903335 env[1374]: time="2025-03-17T18:56:05.903312400Z" level=error msg="encountered an error cleaning up failed sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.903620 env[1374]: time="2025-03-17T18:56:05.903352913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnp4k,Uid:14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.903620 env[1374]: time="2025-03-17T18:56:05.903504655Z" level=error msg="Failed to destroy network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.904180 env[1374]: time="2025-03-17T18:56:05.904162035Z" level=error msg="encountered an error cleaning up failed sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.904212 env[1374]: time="2025-03-17T18:56:05.904183982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587b6b84cf-tshpw,Uid:5a66bb45-5aee-4882-aac7-d77a88647401,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.907037 kubelet[2365]: E0317 18:56:05.905442 2365 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.908173 env[1374]: time="2025-03-17T18:56:05.908151964Z" level=error msg="Failed to destroy network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.908361 env[1374]: time="2025-03-17T18:56:05.908343260Z" level=error msg="encountered an error 
cleaning up failed sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.908401 env[1374]: time="2025-03-17T18:56:05.908365980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587b6b84cf-nvpll,Uid:885d0d3f-49ac-45f6-83b5-96856df9e4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.908679 kubelet[2365]: E0317 18:56:05.908650 2365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pnp4k" Mar 17 18:56:05.908722 kubelet[2365]: E0317 18:56:05.908684 2365 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pnp4k" Mar 17 18:56:05.908748 kubelet[2365]: E0317 18:56:05.908714 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pnp4k_kube-system(14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pnp4k_kube-system(14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pnp4k" podUID="14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04" Mar 17 18:56:05.909634 kubelet[2365]: E0317 18:56:05.909613 2365 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.909681 kubelet[2365]: E0317 18:56:05.909633 2365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-587b6b84cf-nvpll" Mar 17 18:56:05.909681 kubelet[2365]: E0317 18:56:05.909652 2365 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587b6b84cf-nvpll" Mar 17 18:56:05.909750 kubelet[2365]: E0317 18:56:05.909680 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-587b6b84cf-nvpll_calico-apiserver(885d0d3f-49ac-45f6-83b5-96856df9e4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-587b6b84cf-nvpll_calico-apiserver(885d0d3f-49ac-45f6-83b5-96856df9e4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-587b6b84cf-nvpll" podUID="885d0d3f-49ac-45f6-83b5-96856df9e4b0" Mar 17 18:56:05.909750 kubelet[2365]: E0317 18:56:05.909700 2365 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.909750 kubelet[2365]: E0317 18:56:05.909712 2365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587b6b84cf-tshpw" Mar 17 18:56:05.909859 kubelet[2365]: E0317 18:56:05.909721 2365 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587b6b84cf-tshpw" Mar 17 18:56:05.909859 kubelet[2365]: E0317 18:56:05.909743 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-587b6b84cf-tshpw_calico-apiserver(5a66bb45-5aee-4882-aac7-d77a88647401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-587b6b84cf-tshpw_calico-apiserver(5a66bb45-5aee-4882-aac7-d77a88647401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-587b6b84cf-tshpw" podUID="5a66bb45-5aee-4882-aac7-d77a88647401" Mar 17 18:56:05.910952 kubelet[2365]: E0317 18:56:05.910931 2365 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.910999 kubelet[2365]: E0317 18:56:05.910957 2365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b484f5775-2np4h" Mar 17 18:56:05.910999 kubelet[2365]: E0317 18:56:05.910968 2365 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b484f5775-2np4h" Mar 17 18:56:05.911061 kubelet[2365]: E0317 18:56:05.910988 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b484f5775-2np4h_calico-system(1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b484f5775-2np4h_calico-system(1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b484f5775-2np4h" podUID="1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e" Mar 17 18:56:05.912019 env[1374]: time="2025-03-17T18:56:05.911949614Z" level=error msg="Failed to destroy network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.913137 env[1374]: time="2025-03-17T18:56:05.912740316Z" level=error msg="encountered an error cleaning up failed sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.913137 env[1374]: time="2025-03-17T18:56:05.913077678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzgcp,Uid:82369388-1126-47fe-9068-8215d11684f0,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.913627 kubelet[2365]: E0317 18:56:05.913608 2365 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:05.913679 kubelet[2365]: E0317 18:56:05.913631 2365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zzgcp" Mar 17 18:56:05.913679 kubelet[2365]: E0317 18:56:05.913649 2365 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zzgcp" Mar 17 18:56:05.913741 kubelet[2365]: E0317 18:56:05.913724 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zzgcp_kube-system(82369388-1126-47fe-9068-8215d11684f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zzgcp_kube-system(82369388-1126-47fe-9068-8215d11684f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zzgcp" podUID="82369388-1126-47fe-9068-8215d11684f0" Mar 17 18:56:06.214339 kubelet[2365]: I0317 18:56:06.212930 2365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:06.216391 kubelet[2365]: I0317 18:56:06.216373 2365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:06.218678 kubelet[2365]: I0317 18:56:06.218651 2365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:06.222494 env[1374]: time="2025-03-17T18:56:06.222475620Z" level=info msg="StopPodSandbox for \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\"" Mar 17 18:56:06.223001 kubelet[2365]: I0317 18:56:06.222935 2365 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:06.229214 env[1374]: time="2025-03-17T18:56:06.224753605Z" level=info msg="StopPodSandbox for \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\"" Mar 17 18:56:06.229214 env[1374]: time="2025-03-17T18:56:06.222507174Z" level=info msg="StopPodSandbox for \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\"" Mar 17 18:56:06.229214 env[1374]: time="2025-03-17T18:56:06.222523015Z" level=info msg="StopPodSandbox for \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\"" Mar 17 18:56:06.229539 env[1374]: time="2025-03-17T18:56:06.229522541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Mar 17 18:56:06.239240 kubelet[2365]: I0317 18:56:06.238775 2365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:06.243257 env[1374]: time="2025-03-17T18:56:06.243232367Z" level=info msg="StopPodSandbox for \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\"" Mar 17 18:56:06.261670 env[1374]: time="2025-03-17T18:56:06.261630493Z" level=error msg="StopPodSandbox for \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\" failed" error="failed to destroy network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:06.261808 kubelet[2365]: E0317 18:56:06.261782 2365 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:06.261864 kubelet[2365]: E0317 18:56:06.261821 2365 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e"} Mar 17 18:56:06.261893 kubelet[2365]: E0317 18:56:06.261859 2365 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a66bb45-5aee-4882-aac7-d77a88647401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:56:06.261893 kubelet[2365]: E0317 18:56:06.261876 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a66bb45-5aee-4882-aac7-d77a88647401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-587b6b84cf-tshpw" podUID="5a66bb45-5aee-4882-aac7-d77a88647401" Mar 17 18:56:06.271071 env[1374]: time="2025-03-17T18:56:06.271038109Z" level=error msg="StopPodSandbox for \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\" failed" error="failed to destroy network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:06.271334 kubelet[2365]: E0317 18:56:06.271309 2365 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:06.271393 kubelet[2365]: E0317 18:56:06.271343 2365 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300"} Mar 17 18:56:06.271393 kubelet[2365]: E0317 18:56:06.271369 2365 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82369388-1126-47fe-9068-8215d11684f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:56:06.271393 kubelet[2365]: E0317 18:56:06.271386 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82369388-1126-47fe-9068-8215d11684f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zzgcp" podUID="82369388-1126-47fe-9068-8215d11684f0" Mar 17 18:56:06.289622 env[1374]: time="2025-03-17T18:56:06.289578708Z" level=error msg="StopPodSandbox for \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\" failed" error="failed to destroy network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:06.289850 env[1374]: time="2025-03-17T18:56:06.289716882Z" level=error msg="StopPodSandbox for \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\" failed" error="failed to destroy network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:06.289889 kubelet[2365]: 
E0317 18:56:06.289826 2365 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:06.289889 kubelet[2365]: E0317 18:56:06.289857 2365 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b"} Mar 17 18:56:06.289889 kubelet[2365]: E0317 18:56:06.289878 2365 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"885d0d3f-49ac-45f6-83b5-96856df9e4b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:56:06.289996 kubelet[2365]: E0317 18:56:06.289893 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"885d0d3f-49ac-45f6-83b5-96856df9e4b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-587b6b84cf-nvpll" podUID="885d0d3f-49ac-45f6-83b5-96856df9e4b0" Mar 17 18:56:06.289996 kubelet[2365]: E0317 18:56:06.289829 2365 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:06.289996 kubelet[2365]: E0317 18:56:06.289914 2365 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5"} Mar 17 18:56:06.289996 kubelet[2365]: E0317 18:56:06.289927 2365 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:56:06.290116 kubelet[2365]: E0317 18:56:06.289942 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pnp4k" podUID="14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04" Mar 17 18:56:06.290559 env[1374]: time="2025-03-17T18:56:06.290535007Z" level=error msg="StopPodSandbox for \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\" failed" error="failed to destroy network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:06.290623 kubelet[2365]: E0317 18:56:06.290609 2365 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:06.290941 kubelet[2365]: E0317 18:56:06.290625 2365 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126"} Mar 17 18:56:06.290941 kubelet[2365]: E0317 18:56:06.290637 2365 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:56:06.290941 kubelet[2365]: E0317 18:56:06.290648 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b484f5775-2np4h" podUID="1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e" Mar 17 18:56:07.113284 env[1374]: time="2025-03-17T18:56:07.113259384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xnh4,Uid:1165f5ec-1445-4386-b540-a9b8a16322f3,Namespace:calico-system,Attempt:0,}" Mar 17 18:56:07.148659 env[1374]: time="2025-03-17T18:56:07.148624893Z" level=error msg="Failed to destroy network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:07.150234 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a-shm.mount: Deactivated successfully. Mar 17 18:56:07.150618 env[1374]: time="2025-03-17T18:56:07.150596629Z" level=error msg="encountered an error cleaning up failed sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:07.150720 env[1374]: time="2025-03-17T18:56:07.150702751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xnh4,Uid:1165f5ec-1445-4386-b540-a9b8a16322f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:07.151108 kubelet[2365]: E0317 18:56:07.150889 2365 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:07.151108 kubelet[2365]: E0317 18:56:07.150924 2365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9xnh4" Mar 17 18:56:07.151108 kubelet[2365]: E0317 18:56:07.150937 2365 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9xnh4" Mar 17 18:56:07.151922 kubelet[2365]: E0317 18:56:07.150964 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9xnh4_calico-system(1165f5ec-1445-4386-b540-a9b8a16322f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9xnh4_calico-system(1165f5ec-1445-4386-b540-a9b8a16322f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:56:07.240072 kubelet[2365]: I0317 18:56:07.240051 2365 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:07.240486 env[1374]: time="2025-03-17T18:56:07.240470628Z" level=info msg="StopPodSandbox for \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\"" Mar 17 18:56:07.257459 env[1374]: time="2025-03-17T18:56:07.257418517Z" level=error msg="StopPodSandbox for \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\" failed" error="failed to destroy network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:56:07.257567 kubelet[2365]: E0317 18:56:07.257546 2365 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:07.257607 kubelet[2365]: E0317 18:56:07.257583 2365 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a"} Mar 17 18:56:07.257635 kubelet[2365]: E0317 18:56:07.257605 2365 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1165f5ec-1445-4386-b540-a9b8a16322f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:56:07.257635 kubelet[2365]: E0317 18:56:07.257618 2365 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1165f5ec-1445-4386-b540-a9b8a16322f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9xnh4" podUID="1165f5ec-1445-4386-b540-a9b8a16322f3" Mar 17 18:56:14.862378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3172772204.mount: Deactivated successfully. 
Mar 17 18:56:14.896433 env[1374]: time="2025-03-17T18:56:14.896402313Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:14.917954 env[1374]: time="2025-03-17T18:56:14.917938945Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:15.105926 env[1374]: time="2025-03-17T18:56:15.105893639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:15.106956 env[1374]: time="2025-03-17T18:56:15.106937103Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:15.107365 env[1374]: time="2025-03-17T18:56:15.107345920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Mar 17 18:56:15.131052 env[1374]: time="2025-03-17T18:56:15.130832118Z" level=info msg="CreateContainer within sandbox \"7e8734cc11ea517438c33d9a19f3acef4e76a8aeeaff79a59991c23df1c9b3bb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 18:56:15.146530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497220592.mount: Deactivated successfully. Mar 17 18:56:15.148102 env[1374]: time="2025-03-17T18:56:15.148082036Z" level=info msg="CreateContainer within sandbox \"7e8734cc11ea517438c33d9a19f3acef4e76a8aeeaff79a59991c23df1c9b3bb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc402cd8f0d3eb00deac3671dd803fd3f25210ced477678b0696d88d56c67c12\"" Mar 17 18:56:15.149333 env[1374]: time="2025-03-17T18:56:15.149319357Z" level=info msg="StartContainer for \"bc402cd8f0d3eb00deac3671dd803fd3f25210ced477678b0696d88d56c67c12\"" Mar 17 18:56:15.184213 env[1374]: time="2025-03-17T18:56:15.184191981Z" level=info msg="StartContainer for \"bc402cd8f0d3eb00deac3671dd803fd3f25210ced477678b0696d88d56c67c12\" returns successfully" Mar 17 18:56:16.318024 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 18:56:16.318110 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 18:56:16.346378 kubelet[2365]: I0317 18:56:16.346358 2365 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:56:16.371355 systemd[1]: run-containerd-runc-k8s.io-bc402cd8f0d3eb00deac3671dd803fd3f25210ced477678b0696d88d56c67c12-runc.saZCYg.mount: Deactivated successfully. Mar 17 18:56:16.863380 systemd[1]: run-containerd-runc-k8s.io-bc402cd8f0d3eb00deac3671dd803fd3f25210ced477678b0696d88d56c67c12-runc.gCke4w.mount: Deactivated successfully. 
Mar 17 18:56:17.620957 kernel: kauditd_printk_skb: 8 callbacks suppressed Mar 17 18:56:17.625931 kernel: audit: type=1400 audit(1742237777.616:290): avc: denied { write } for pid=3572 comm="tee" name="fd" dev="proc" ino=36959 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.626842 kernel: audit: type=1300 audit(1742237777.616:290): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff992ea2a a2=241 a3=1b6 items=1 ppid=3520 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.616000 audit[3572]: AVC avc: denied { write } for pid=3572 comm="tee" name="fd" dev="proc" ino=36959 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.633731 kernel: audit: type=1307 audit(1742237777.616:290): cwd="/etc/service/enabled/felix/log" Mar 17 18:56:17.633772 kernel: audit: type=1302 audit(1742237777.616:290): item=0 name="/dev/fd/63" inode=36941 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.616000 audit[3572]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff992ea2a a2=241 a3=1b6 items=1 ppid=3520 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.646102 kernel: audit: type=1327 audit(1742237777.616:290): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:17.646154 kernel: audit: type=1400 audit(1742237777.628:291): avc: denied { write } for pid=3574 comm="tee" name="fd" dev="proc" ino=36965 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.616000 audit: CWD cwd="/etc/service/enabled/felix/log" Mar 17 18:56:17.616000 audit: PATH item=0 name="/dev/fd/63" inode=36941 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.651344 kernel: audit: type=1300 audit(1742237777.628:291): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffde134a1a a2=241 a3=1b6 items=1 ppid=3515 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.652685 kernel: audit: type=1307 audit(1742237777.628:291): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Mar 17 18:56:17.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:17.628000 audit[3574]: AVC avc: denied { write } for pid=3574 comm="tee" name="fd" dev="proc" ino=36965 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.628000 audit[3574]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffde134a1a a2=241 a3=1b6 items=1 ppid=3515 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.628000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Mar 17 18:56:17.628000 audit: PATH item=0 name="/dev/fd/63" inode=36944 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.656507 kernel: audit: type=1302 audit(1742237777.628:291): item=0 name="/dev/fd/63" inode=36944 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.628000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:17.644000 audit[3581]: AVC avc: denied { write } for pid=3581 comm="tee" name="fd" dev="proc" ino=35927 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.644000 audit[3581]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe42fffa2b a2=241 a3=1b6 items=1 ppid=3534 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.644000 audit: CWD cwd="/etc/service/enabled/bird/log" Mar 17 18:56:17.644000 audit: PATH item=0 name="/dev/fd/63" inode=36954 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:17.646000 audit[3557]: AVC avc: denied { write } for pid=3557 comm="tee" name="fd" dev="proc" ino=35931 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.663888 kernel: audit: type=1327 audit(1742237777.628:291): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:17.646000 audit[3557]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd9f612a2c a2=241 a3=1b6 items=1 ppid=3526 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.646000 audit: CWD cwd="/etc/service/enabled/cni/log" Mar 17 18:56:17.646000 audit: PATH item=0 name="/dev/fd/63" inode=35918 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:17.647000 audit[3570]: AVC avc: denied { write } for pid=3570 comm="tee" name="fd" dev="proc" ino=35935 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.647000 audit[3570]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd23265a2a a2=241 a3=1b6 items=1 ppid=3519 pid=3570 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.647000 audit: CWD cwd="/etc/service/enabled/bird6/log" Mar 17 18:56:17.647000 audit: PATH item=0 name="/dev/fd/63" inode=36937 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:17.660000 audit[3586]: AVC avc: denied { write } for pid=3586 comm="tee" name="fd" dev="proc" ino=35939 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.660000 audit[3586]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff92c33a1b a2=241 a3=1b6 items=1 ppid=3514 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.660000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Mar 17 18:56:17.660000 audit: PATH item=0 name="/dev/fd/63" inode=35923 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:17.685000 audit[3588]: AVC avc: denied { write } for pid=3588 comm="tee" name="fd" dev="proc" ino=35943 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:56:17.685000 audit[3588]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde9011a2a a2=241 a3=1b6 items=1 ppid=3521 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:17.685000 audit: CWD cwd="/etc/service/enabled/confd/log" Mar 17 18:56:17.685000 audit: PATH item=0 name="/dev/fd/63" inode=35922 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:56:17.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:56:18.115160 env[1374]: time="2025-03-17T18:56:18.115100455Z" level=info msg="StopPodSandbox for \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\"" Mar 17 18:56:18.182109 kubelet[2365]: I0317 18:56:18.179621 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cqsxd" podStartSLOduration=3.955706085 podStartE2EDuration="25.1736576s" podCreationTimestamp="2025-03-17 18:55:53 +0000 UTC" firstStartedPulling="2025-03-17 18:55:53.890259853 +0000 UTC m=+21.905804624" lastFinishedPulling="2025-03-17 18:56:15.10821136 +0000 UTC m=+43.123756139" observedRunningTime="2025-03-17 18:56:15.272421783 +0000 UTC m=+43.287966562" watchObservedRunningTime="2025-03-17 18:56:18.1736576 +0000 UTC m=+46.189202374" Mar 17 
18:56:18.719190 env[1374]: 2025-03-17 18:56:18.172 [INFO][3611] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.173 [INFO][3611] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" iface="eth0" netns="/var/run/netns/cni-c5068665-5cf0-5ce0-08b5-658148ee5944" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.173 [INFO][3611] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" iface="eth0" netns="/var/run/netns/cni-c5068665-5cf0-5ce0-08b5-658148ee5944" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.176 [INFO][3611] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" iface="eth0" netns="/var/run/netns/cni-c5068665-5cf0-5ce0-08b5-658148ee5944" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.176 [INFO][3611] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.176 [INFO][3611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.701 [INFO][3618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.704 [INFO][3618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.704 [INFO][3618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.716 [WARNING][3618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.716 [INFO][3618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.716 [INFO][3618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:18.719190 env[1374]: 2025-03-17 18:56:18.717 [INFO][3611] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:18.721929 env[1374]: time="2025-03-17T18:56:18.721705144Z" level=info msg="TearDown network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\" successfully" Mar 17 18:56:18.721929 env[1374]: time="2025-03-17T18:56:18.721732576Z" level=info msg="StopPodSandbox for \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\" returns successfully" Mar 17 18:56:18.721007 systemd[1]: run-netns-cni\x2dc5068665\x2d5cf0\x2d5ce0\x2d08b5\x2d658148ee5944.mount: Deactivated successfully. Mar 17 18:56:18.723869 env[1374]: time="2025-03-17T18:56:18.722338029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzgcp,Uid:82369388-1126-47fe-9068-8215d11684f0,Namespace:kube-system,Attempt:1,}" Mar 17 18:56:18.838821 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:56:18.838978 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali56d1bffecf8: link becomes ready Mar 17 18:56:18.848941 systemd-networkd[1121]: cali56d1bffecf8: Link UP Mar 17 18:56:18.849372 systemd-networkd[1121]: cali56d1bffecf8: Gained carrier Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.773 [INFO][3645] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.780 [INFO][3645] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0 coredns-7db6d8ff4d- kube-system 82369388-1126-47fe-9068-8215d11684f0 752 0 2025-03-17 18:55:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-zzgcp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali56d1bffecf8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zzgcp-" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.780 [INFO][3645] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.801 [INFO][3657] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" HandleID="k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.810 [INFO][3657] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" HandleID="k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336c80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-zzgcp", "timestamp":"2025-03-17 18:56:18.801116645 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.810 [INFO][3657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.810 [INFO][3657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.810 [INFO][3657] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.811 [INFO][3657] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.816 [INFO][3657] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.818 [INFO][3657] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.818 [INFO][3657] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.819 [INFO][3657] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.819 [INFO][3657] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.820 [INFO][3657] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.822 [INFO][3657] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.826 [INFO][3657] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.826 [INFO][3657] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" host="localhost" Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.826 [INFO][3657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:56:18.850303 env[1374]: 2025-03-17 18:56:18.826 [INFO][3657] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" HandleID="k8s-pod-network.f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.854197 env[1374]: 2025-03-17 18:56:18.828 [INFO][3645] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"82369388-1126-47fe-9068-8215d11684f0", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-zzgcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56d1bffecf8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:18.854197 env[1374]: 2025-03-17 18:56:18.828 [INFO][3645] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.854197 env[1374]: 2025-03-17 18:56:18.828 [INFO][3645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56d1bffecf8 ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.854197 env[1374]: 2025-03-17 18:56:18.839 [INFO][3645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.854197 env[1374]: 2025-03-17 18:56:18.839 [INFO][3645] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"82369388-1126-47fe-9068-8215d11684f0", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a", Pod:"coredns-7db6d8ff4d-zzgcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56d1bffecf8", MAC:"c2:4a:72:35:a9:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:18.854197 env[1374]: 2025-03-17 18:56:18.845 [INFO][3645] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zzgcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:18.865762 env[1374]: time="2025-03-17T18:56:18.865655006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:56:18.865762 env[1374]: time="2025-03-17T18:56:18.865741489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:56:18.865891 env[1374]: time="2025-03-17T18:56:18.865749591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:56:18.866166 env[1374]: time="2025-03-17T18:56:18.866140653Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a pid=3682 runtime=io.containerd.runc.v2 Mar 17 18:56:18.884256 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:56:18.907078 env[1374]: time="2025-03-17T18:56:18.907054481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zzgcp,Uid:82369388-1126-47fe-9068-8215d11684f0,Namespace:kube-system,Attempt:1,} returns sandbox id \"f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a\"" Mar 17 18:56:18.947275 env[1374]: time="2025-03-17T18:56:18.947237036Z" level=info msg="CreateContainer within sandbox \"f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:56:19.069692 env[1374]: time="2025-03-17T18:56:19.069660453Z" level=info msg="CreateContainer within sandbox \"f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c13110c49ed3eaa48ad48e616409cd4af9ea9399bc2aef03a4578be91406319e\"" Mar 17 18:56:19.070100 env[1374]: time="2025-03-17T18:56:19.070085664Z" level=info msg="StartContainer for \"c13110c49ed3eaa48ad48e616409cd4af9ea9399bc2aef03a4578be91406319e\"" Mar 17 18:56:19.112407 env[1374]: time="2025-03-17T18:56:19.112380904Z" level=info msg="StopPodSandbox for \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\"" Mar 17 18:56:19.180093 env[1374]: time="2025-03-17T18:56:19.180064899Z" level=info msg="StartContainer for \"c13110c49ed3eaa48ad48e616409cd4af9ea9399bc2aef03a4578be91406319e\" returns successfully" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.178 [INFO][3755] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.178 [INFO][3755] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" iface="eth0" netns="/var/run/netns/cni-564a3804-f719-eb8b-6756-0fdb2641ca7d" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.178 [INFO][3755] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" iface="eth0" netns="/var/run/netns/cni-564a3804-f719-eb8b-6756-0fdb2641ca7d" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.178 [INFO][3755] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" iface="eth0" netns="/var/run/netns/cni-564a3804-f719-eb8b-6756-0fdb2641ca7d" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.178 [INFO][3755] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.178 [INFO][3755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.192 [INFO][3768] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.192 [INFO][3768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.192 [INFO][3768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.220 [WARNING][3768] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.220 [INFO][3768] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.234 [INFO][3768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:19.237016 env[1374]: 2025-03-17 18:56:19.235 [INFO][3755] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:19.244212 env[1374]: time="2025-03-17T18:56:19.237143670Z" level=info msg="TearDown network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\" successfully" Mar 17 18:56:19.244212 env[1374]: time="2025-03-17T18:56:19.237167645Z" level=info msg="StopPodSandbox for \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\" returns successfully" Mar 17 18:56:19.244212 env[1374]: time="2025-03-17T18:56:19.237634996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b484f5775-2np4h,Uid:1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e,Namespace:calico-system,Attempt:1,}" Mar 17 18:56:19.404701 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid64bdfa6a82: link becomes ready Mar 17 18:56:19.402953 systemd-networkd[1121]: calid64bdfa6a82: Link UP Mar 17 18:56:19.404607 systemd-networkd[1121]: calid64bdfa6a82: Gained carrier Mar 17 18:56:19.416933 kubelet[2365]: I0317 18:56:19.416782 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zzgcp" podStartSLOduration=33.416768229 podStartE2EDuration="33.416768229s" podCreationTimestamp="2025-03-17 18:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:56:19.328243668 +0000 UTC m=+47.343788440" watchObservedRunningTime="2025-03-17 18:56:19.416768229 +0000 UTC m=+47.432313002" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.321 [INFO][3775] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.348 [INFO][3775] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0 calico-kube-controllers-5b484f5775- calico-system 1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e 760 0 2025-03-17 18:55:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b484f5775 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b484f5775-2np4h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid64bdfa6a82 [] []}} ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Namespace="calico-system" Pod="calico-kube-controllers-5b484f5775-2np4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.348 [INFO][3775] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Namespace="calico-system" Pod="calico-kube-controllers-5b484f5775-2np4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.370 [INFO][3786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" HandleID="k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.376 [INFO][3786] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" HandleID="k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b484f5775-2np4h", "timestamp":"2025-03-17 18:56:19.370860099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.376 [INFO][3786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.376 [INFO][3786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.376 [INFO][3786] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.376 [INFO][3786] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.379 [INFO][3786] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.381 [INFO][3786] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.382 [INFO][3786] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.382 [INFO][3786] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.383 [INFO][3786] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.383 [INFO][3786] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.387 [INFO][3786] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.399 [INFO][3786] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.399 [INFO][3786] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" host="localhost" Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.399 [INFO][3786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
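The IPAM trace above loads the host's affine block 192.168.88.128/26 and claims single /32 addresses out of it (192.168.88.129 for coredns-7db6d8ff4d-zzgcp earlier, 192.168.88.130 here). A short illustrative sketch of that containment arithmetic, not taken from Calico's code:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    for addr in ("192.168.88.129", "192.168.88.130"):
        assert ipaddress.ip_address(addr) in block  # each claimed /32 falls inside the affine block
    print(block.num_addresses)  # 64 addresses available per /26 block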
Mar 17 18:56:19.418691 env[1374]: 2025-03-17 18:56:19.399 [INFO][3786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" HandleID="k8s-pod-network.53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.421592 env[1374]: 2025-03-17 18:56:19.401 [INFO][3775] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Namespace="calico-system" Pod="calico-kube-controllers-5b484f5775-2np4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0", GenerateName:"calico-kube-controllers-5b484f5775-", Namespace:"calico-system", SelfLink:"", UID:"1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b484f5775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b484f5775-2np4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid64bdfa6a82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:19.421592 env[1374]: 2025-03-17 18:56:19.401 [INFO][3775] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Namespace="calico-system" Pod="calico-kube-controllers-5b484f5775-2np4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.421592 env[1374]: 2025-03-17 18:56:19.401 [INFO][3775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid64bdfa6a82 ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Namespace="calico-system" Pod="calico-kube-controllers-5b484f5775-2np4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.421592 env[1374]: 2025-03-17 18:56:19.405 [INFO][3775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Namespace="calico-system" Pod="calico-kube-controllers-5b484f5775-2np4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.421592 env[1374]: 2025-03-17 18:56:19.406 [INFO][3775] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Namespace="calico-system" Pod="calico-kube-controllers-5b484f5775-2np4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0", GenerateName:"calico-kube-controllers-5b484f5775-", Namespace:"calico-system", SelfLink:"", UID:"1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b484f5775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a", Pod:"calico-kube-controllers-5b484f5775-2np4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid64bdfa6a82", MAC:"d2:af:8d:ce:eb:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:19.421592 env[1374]: 2025-03-17 18:56:19.417 [INFO][3775] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a" Namespace="calico-system" Pod="calico-kube-controllers-5b484f5775-2np4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:19.433063 env[1374]: time="2025-03-17T18:56:19.433010490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:56:19.433154 env[1374]: time="2025-03-17T18:56:19.433050277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:56:19.433154 env[1374]: time="2025-03-17T18:56:19.433059059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:56:19.433294 env[1374]: time="2025-03-17T18:56:19.433240566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a pid=3810 runtime=io.containerd.runc.v2 Mar 17 18:56:19.452775 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:56:19.484484 env[1374]: time="2025-03-17T18:56:19.484453998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b484f5775-2np4h,Uid:1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e,Namespace:calico-system,Attempt:1,} returns sandbox id \"53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a\"" Mar 17 18:56:19.513699 env[1374]: time="2025-03-17T18:56:19.513675514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Mar 17 18:56:19.554000 audit[3847]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:19.554000 audit[3847]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fff4c3b1eb0 a2=0 a3=7fff4c3b1e9c items=0 ppid=2501 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:19.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:19.557000 audit[3847]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:19.557000 audit[3847]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4c3b1eb0 a2=0 a3=0 items=0 ppid=2501 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:19.557000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:19.722218 systemd[1]: run-netns-cni\x2d564a3804\x2df719\x2deb8b\x2d6756\x2d0fdb2641ca7d.mount: Deactivated successfully. 
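The audit records above (the tee AVC entries at 18:56:17 and the iptables-restore NETFILTER_CFG entries at 18:56:19) carry the command line as a hex-encoded PROCTITLE field with NUL-separated arguments. A hypothetical decoder, shown only to make those fields readable:

    def decode_proctitle(hex_title: str) -> str:
        # auditd records the command line as hex bytes, NUL between arguments
        return bytes.fromhex(hex_title).replace(b"\x00", b" ").decode()

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))  # -> iptables-restore -w 5 -W 100000 --noflush --counters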
Mar 17 18:56:20.114115 env[1374]: time="2025-03-17T18:56:20.114089333Z" level=info msg="StopPodSandbox for \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\"" Mar 17 18:56:20.115118 env[1374]: time="2025-03-17T18:56:20.114190800Z" level=info msg="StopPodSandbox for \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\"" Mar 17 18:56:20.115290 env[1374]: time="2025-03-17T18:56:20.114200370Z" level=info msg="StopPodSandbox for \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\"" Mar 17 18:56:20.380000 audit[3946]: NETFILTER_CFG table=filter:97 family=2 entries=15 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:20.380000 audit[3946]: SYSCALL arch=c000003e syscall=46 success=yes exit=4420 a0=3 a1=7ffe5da62ae0 a2=0 a3=7ffe5da62acc items=0 ppid=2501 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:20.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:20.382000 audit[3946]: NETFILTER_CFG table=nat:98 family=2 entries=33 op=nft_register_chain pid=3946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:20.382000 audit[3946]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffe5da62ae0 a2=0 a3=7ffe5da62acc items=0 ppid=2501 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:20.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.357 [INFO][3917] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.357 [INFO][3917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" iface="eth0" netns="/var/run/netns/cni-370924ad-73d3-e07a-7cdc-196f17da5108" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.357 [INFO][3917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" iface="eth0" netns="/var/run/netns/cni-370924ad-73d3-e07a-7cdc-196f17da5108" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.358 [INFO][3917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" iface="eth0" netns="/var/run/netns/cni-370924ad-73d3-e07a-7cdc-196f17da5108" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.358 [INFO][3917] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.358 [INFO][3917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.392 [INFO][3936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.393 [INFO][3936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.393 [INFO][3936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.396 [WARNING][3936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.396 [INFO][3936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.397 [INFO][3936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:20.400364 env[1374]: 2025-03-17 18:56:20.399 [INFO][3917] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:20.409817 env[1374]: time="2025-03-17T18:56:20.404083507Z" level=info msg="TearDown network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\" successfully" Mar 17 18:56:20.409817 env[1374]: time="2025-03-17T18:56:20.404104411Z" level=info msg="StopPodSandbox for \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\" returns successfully" Mar 17 18:56:20.409817 env[1374]: time="2025-03-17T18:56:20.404956054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnp4k,Uid:14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04,Namespace:kube-system,Attempt:1,}" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.369 [INFO][3919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.369 [INFO][3919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" iface="eth0" netns="/var/run/netns/cni-7b04e8d3-3233-551e-ead4-b1ae14742e66" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.371 [INFO][3919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" iface="eth0" netns="/var/run/netns/cni-7b04e8d3-3233-551e-ead4-b1ae14742e66" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.372 [INFO][3919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" iface="eth0" netns="/var/run/netns/cni-7b04e8d3-3233-551e-ead4-b1ae14742e66" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.372 [INFO][3919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.372 [INFO][3919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.397 [INFO][3940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.397 [INFO][3940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.397 [INFO][3940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.403 [WARNING][3940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.403 [INFO][3940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.404 [INFO][3940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:20.409817 env[1374]: 2025-03-17 18:56:20.406 [INFO][3919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:20.409817 env[1374]: time="2025-03-17T18:56:20.408958894Z" level=info msg="TearDown network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\" successfully" Mar 17 18:56:20.409817 env[1374]: time="2025-03-17T18:56:20.408976222Z" level=info msg="StopPodSandbox for \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\" returns successfully" Mar 17 18:56:20.402602 systemd[1]: run-netns-cni\x2d370924ad\x2d73d3\x2de07a\x2d7cdc\x2d196f17da5108.mount: Deactivated successfully. Mar 17 18:56:20.410879 env[1374]: time="2025-03-17T18:56:20.410273260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587b6b84cf-nvpll,Uid:885d0d3f-49ac-45f6-83b5-96856df9e4b0,Namespace:calico-apiserver,Attempt:1,}" Mar 17 18:56:20.408855 systemd[1]: run-netns-cni\x2d7b04e8d3\x2d3233\x2d551e\x2dead4\x2db1ae14742e66.mount: Deactivated successfully. 
Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.389 [INFO][3918] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.389 [INFO][3918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" iface="eth0" netns="/var/run/netns/cni-a04647e0-3834-8d99-a283-414cba2df145" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.389 [INFO][3918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" iface="eth0" netns="/var/run/netns/cni-a04647e0-3834-8d99-a283-414cba2df145" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.389 [INFO][3918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" iface="eth0" netns="/var/run/netns/cni-a04647e0-3834-8d99-a283-414cba2df145" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.389 [INFO][3918] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.389 [INFO][3918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.431 [INFO][3947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.431 [INFO][3947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.431 [INFO][3947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.435 [WARNING][3947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.435 [INFO][3947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.441 [INFO][3947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:20.446192 env[1374]: 2025-03-17 18:56:20.442 [INFO][3918] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:20.446944 env[1374]: time="2025-03-17T18:56:20.446309210Z" level=info msg="TearDown network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\" successfully" Mar 17 18:56:20.446944 env[1374]: time="2025-03-17T18:56:20.446328585Z" level=info msg="StopPodSandbox for \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\" returns successfully" Mar 17 18:56:20.446944 env[1374]: time="2025-03-17T18:56:20.446924400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587b6b84cf-tshpw,Uid:5a66bb45-5aee-4882-aac7-d77a88647401,Namespace:calico-apiserver,Attempt:1,}" Mar 17 18:56:20.513333 systemd-networkd[1121]: calibb64b138edc: Link UP Mar 17 18:56:20.516056 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:56:20.516097 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibb64b138edc: link becomes ready Mar 17 18:56:20.515872 systemd-networkd[1121]: calibb64b138edc: Gained carrier Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.454 [INFO][3955] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.462 [INFO][3955] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0 calico-apiserver-587b6b84cf- calico-apiserver 885d0d3f-49ac-45f6-83b5-96856df9e4b0 781 0 2025-03-17 18:55:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:587b6b84cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-587b6b84cf-nvpll eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibb64b138edc [] []}} ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-nvpll" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.462 [INFO][3955] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-nvpll" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.491 [INFO][3993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" HandleID="k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.497 [INFO][3993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" HandleID="k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050d60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-587b6b84cf-nvpll", "timestamp":"2025-03-17 18:56:20.491769837 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.497 [INFO][3993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.497 [INFO][3993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.497 [INFO][3993] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.498 [INFO][3993] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.499 [INFO][3993] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.501 [INFO][3993] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.502 [INFO][3993] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.503 [INFO][3993] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.503 [INFO][3993] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.504 [INFO][3993] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.506 [INFO][3993] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.509 [INFO][3993] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.509 [INFO][3993] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" host="localhost" Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.509 [INFO][3993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
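The IPAM trace above walks the usual Calico flow: take the host-wide lock, load the host's affine block (192.168.88.128/26), then claim one address (192.168.88.131) from it. A minimal stdlib sketch of the containment and capacity arithmetic, using only values taken from the log:

import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # affinity block from the log
claimed = ipaddress.ip_address("192.168.88.131")    # address written for this pod

print(claimed in block)       # True: the claim stays inside the affine block
print(block.num_addresses)    # 64 addresses per /26 block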
Mar 17 18:56:20.530231 env[1374]: 2025-03-17 18:56:20.509 [INFO][3993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" HandleID="k8s-pod-network.be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.531644 env[1374]: 2025-03-17 18:56:20.510 [INFO][3955] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-nvpll" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0", GenerateName:"calico-apiserver-587b6b84cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"885d0d3f-49ac-45f6-83b5-96856df9e4b0", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587b6b84cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-587b6b84cf-nvpll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb64b138edc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:20.531644 env[1374]: 2025-03-17 18:56:20.510 [INFO][3955] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-nvpll" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.531644 env[1374]: 2025-03-17 18:56:20.510 [INFO][3955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb64b138edc ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-nvpll" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.531644 env[1374]: 2025-03-17 18:56:20.516 [INFO][3955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-nvpll" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.531644 env[1374]: 2025-03-17 18:56:20.516 [INFO][3955] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Namespace="calico-apiserver" 
Pod="calico-apiserver-587b6b84cf-nvpll" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0", GenerateName:"calico-apiserver-587b6b84cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"885d0d3f-49ac-45f6-83b5-96856df9e4b0", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587b6b84cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a", Pod:"calico-apiserver-587b6b84cf-nvpll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb64b138edc", MAC:"36:f2:ac:95:fd:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:20.531644 env[1374]: 2025-03-17 18:56:20.526 [INFO][3955] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-nvpll" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:20.555125 systemd-networkd[1121]: cali8c3d9f9999d: Link UP Mar 17 18:56:20.556344 systemd-networkd[1121]: cali8c3d9f9999d: Gained carrier Mar 17 18:56:20.556717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8c3d9f9999d: link becomes ready Mar 17 18:56:20.558820 env[1374]: time="2025-03-17T18:56:20.558789357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:56:20.558924 env[1374]: time="2025-03-17T18:56:20.558909555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:56:20.558994 env[1374]: time="2025-03-17T18:56:20.558979729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:56:20.559146 env[1374]: time="2025-03-17T18:56:20.559131402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a pid=4034 runtime=io.containerd.runc.v2 Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.467 [INFO][3961] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.473 [INFO][3961] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0 coredns-7db6d8ff4d- kube-system 14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04 780 0 2025-03-17 18:55:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-pnp4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c3d9f9999d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnp4k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnp4k-" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.473 [INFO][3961] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnp4k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.528 [INFO][3999] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" HandleID="k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.533 [INFO][3999] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" HandleID="k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310c00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-pnp4k", "timestamp":"2025-03-17 18:56:20.528178819 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.533 [INFO][3999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.533 [INFO][3999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
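Each "starting signal loop" entry names the shim working directory for a sandbox under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>. A minimal sketch, assuming root access to that runtime directory on the node, for listing which sandbox/container IDs currently have a shim task:

import os

task_root = "/run/containerd/io.containerd.runtime.v2.task/k8s.io"
# One directory per running task; the name is the container/sandbox ID seen in the log.
for entry in sorted(os.listdir(task_root)):
    print(entry)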
Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.534 [INFO][3999] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.534 [INFO][3999] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.536 [INFO][3999] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.538 [INFO][3999] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.539 [INFO][3999] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.540 [INFO][3999] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.540 [INFO][3999] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.541 [INFO][3999] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6 Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.543 [INFO][3999] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.547 [INFO][3999] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.547 [INFO][3999] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" host="localhost" Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.547 [INFO][3999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:56:20.568521 env[1374]: 2025-03-17 18:56:20.547 [INFO][3999] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" HandleID="k8s-pod-network.8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.569056 env[1374]: 2025-03-17 18:56:20.549 [INFO][3961] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnp4k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-pnp4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c3d9f9999d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:20.569056 env[1374]: 2025-03-17 18:56:20.549 [INFO][3961] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnp4k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.569056 env[1374]: 2025-03-17 18:56:20.549 [INFO][3961] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c3d9f9999d ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnp4k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.569056 env[1374]: 2025-03-17 18:56:20.556 [INFO][3961] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnp4k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.569056 env[1374]: 2025-03-17 18:56:20.556 [INFO][3961] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnp4k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6", Pod:"coredns-7db6d8ff4d-pnp4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c3d9f9999d", MAC:"3e:97:ca:79:fe:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:20.569056 env[1374]: 2025-03-17 18:56:20.565 [INFO][3961] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnp4k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:20.591921 systemd-networkd[1121]: cali12d6eabedab: Link UP Mar 17 18:56:20.593258 systemd-networkd[1121]: cali12d6eabedab: Gained carrier Mar 17 18:56:20.593725 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali12d6eabedab: link becomes ready Mar 17 18:56:20.594900 env[1374]: time="2025-03-17T18:56:20.594862809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:56:20.594991 env[1374]: time="2025-03-17T18:56:20.594975722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:56:20.595064 env[1374]: time="2025-03-17T18:56:20.595049840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:56:20.595216 env[1374]: time="2025-03-17T18:56:20.595200242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6 pid=4074 runtime=io.containerd.runc.v2 Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.485 [INFO][3975] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.492 [INFO][3975] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0 calico-apiserver-587b6b84cf- calico-apiserver 5a66bb45-5aee-4882-aac7-d77a88647401 782 0 2025-03-17 18:55:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:587b6b84cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-587b6b84cf-tshpw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12d6eabedab [] []}} ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-tshpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.492 [INFO][3975] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-tshpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.544 [INFO][4008] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" HandleID="k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.560 [INFO][4008] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" HandleID="k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-587b6b84cf-tshpw", "timestamp":"2025-03-17 18:56:20.544292576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.560 [INFO][4008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.560 [INFO][4008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.560 [INFO][4008] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.563 [INFO][4008] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.565 [INFO][4008] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.569 [INFO][4008] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.575 [INFO][4008] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.580 [INFO][4008] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.580 [INFO][4008] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.582 [INFO][4008] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.584 [INFO][4008] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.588 [INFO][4008] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.588 [INFO][4008] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" host="localhost" Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.588 [INFO][4008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:56:20.604968 env[1374]: 2025-03-17 18:56:20.588 [INFO][4008] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" HandleID="k8s-pod-network.9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.606593 env[1374]: 2025-03-17 18:56:20.589 [INFO][3975] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-tshpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0", GenerateName:"calico-apiserver-587b6b84cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a66bb45-5aee-4882-aac7-d77a88647401", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587b6b84cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-587b6b84cf-tshpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d6eabedab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:20.606593 env[1374]: 2025-03-17 18:56:20.589 [INFO][3975] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-tshpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.606593 env[1374]: 2025-03-17 18:56:20.590 [INFO][3975] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12d6eabedab ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-tshpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.606593 env[1374]: 2025-03-17 18:56:20.593 [INFO][3975] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-tshpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.606593 env[1374]: 2025-03-17 18:56:20.596 [INFO][3975] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Namespace="calico-apiserver" 
Pod="calico-apiserver-587b6b84cf-tshpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0", GenerateName:"calico-apiserver-587b6b84cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a66bb45-5aee-4882-aac7-d77a88647401", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587b6b84cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb", Pod:"calico-apiserver-587b6b84cf-tshpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d6eabedab", MAC:"8e:3a:c2:87:85:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:20.606593 env[1374]: 2025-03-17 18:56:20.602 [INFO][3975] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb" Namespace="calico-apiserver" Pod="calico-apiserver-587b6b84cf-tshpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:20.617070 env[1374]: time="2025-03-17T18:56:20.617028980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:56:20.617179 env[1374]: time="2025-03-17T18:56:20.617063793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:56:20.617179 env[1374]: time="2025-03-17T18:56:20.617072843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:56:20.617179 env[1374]: time="2025-03-17T18:56:20.617151515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb pid=4118 runtime=io.containerd.runc.v2 Mar 17 18:56:20.620306 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:56:20.639810 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:56:20.648036 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:56:20.656013 env[1374]: time="2025-03-17T18:56:20.655992238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnp4k,Uid:14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04,Namespace:kube-system,Attempt:1,} returns sandbox id \"8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6\"" Mar 17 18:56:20.662361 env[1374]: time="2025-03-17T18:56:20.662344349Z" level=info msg="CreateContainer within sandbox \"8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:56:20.669191 env[1374]: time="2025-03-17T18:56:20.669164785Z" level=info msg="CreateContainer within sandbox \"8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87096808a800582f00f23f13af61c7e0eadb8d65aeeab4d27a96486288db0e76\"" Mar 17 18:56:20.669804 env[1374]: time="2025-03-17T18:56:20.669791055Z" level=info msg="StartContainer for \"87096808a800582f00f23f13af61c7e0eadb8d65aeeab4d27a96486288db0e76\"" Mar 17 18:56:20.689895 env[1374]: time="2025-03-17T18:56:20.689872228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587b6b84cf-tshpw,Uid:5a66bb45-5aee-4882-aac7-d77a88647401,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb\"" Mar 17 18:56:20.690687 env[1374]: time="2025-03-17T18:56:20.690625626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587b6b84cf-nvpll,Uid:885d0d3f-49ac-45f6-83b5-96856df9e4b0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a\"" Mar 17 18:56:20.713821 env[1374]: time="2025-03-17T18:56:20.713798619Z" level=info msg="StartContainer for \"87096808a800582f00f23f13af61c7e0eadb8d65aeeab4d27a96486288db0e76\" returns successfully" Mar 17 18:56:20.723673 systemd[1]: run-netns-cni\x2da04647e0\x2d3834\x2d8d99\x2da283\x2d414cba2df145.mount: Deactivated successfully. 
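The coredns WorkloadEndpoint dumps earlier in this span print their port numbers in Go hex notation (Port:0x35, Port:0x23c1). A quick conversion showing these are the usual CoreDNS service ports:

print(0x35)    # 53   -> dns / dns-tcp
print(0x23c1)  # 9153 -> metrics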
Mar 17 18:56:20.778743 systemd-networkd[1121]: cali56d1bffecf8: Gained IPv6LL Mar 17 18:56:21.280913 kubelet[2365]: I0317 18:56:21.280880 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pnp4k" podStartSLOduration=35.28086852 podStartE2EDuration="35.28086852s" podCreationTimestamp="2025-03-17 18:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:56:21.274258649 +0000 UTC m=+49.289803428" watchObservedRunningTime="2025-03-17 18:56:21.28086852 +0000 UTC m=+49.296413288" Mar 17 18:56:21.291000 audit[4231]: NETFILTER_CFG table=filter:99 family=2 entries=12 op=nft_register_rule pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:21.291000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=4420 a0=3 a1=7ffe97f2ea80 a2=0 a3=7ffe97f2ea6c items=0 ppid=2501 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:21.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:21.309000 audit[4231]: NETFILTER_CFG table=nat:100 family=2 entries=54 op=nft_register_chain pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:21.309000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffe97f2ea80 a2=0 a3=7ffe97f2ea6c items=0 ppid=2501 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:21.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:21.353757 systemd-networkd[1121]: calid64bdfa6a82: Gained IPv6LL Mar 17 18:56:21.929808 systemd-networkd[1121]: cali12d6eabedab: Gained IPv6LL Mar 17 18:56:22.113104 env[1374]: time="2025-03-17T18:56:22.113080419Z" level=info msg="StopPodSandbox for \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\"" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.181 [INFO][4269] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.184 [INFO][4269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" iface="eth0" netns="/var/run/netns/cni-7fad2ee9-86bf-4b81-8615-7bd6b09be534" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.184 [INFO][4269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" iface="eth0" netns="/var/run/netns/cni-7fad2ee9-86bf-4b81-8615-7bd6b09be534" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.184 [INFO][4269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" iface="eth0" netns="/var/run/netns/cni-7fad2ee9-86bf-4b81-8615-7bd6b09be534" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.184 [INFO][4269] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.184 [INFO][4269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.206 [INFO][4275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.207 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.207 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.212 [WARNING][4275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.212 [INFO][4275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.216 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:22.220894 env[1374]: 2025-03-17 18:56:22.219 [INFO][4269] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:22.223022 systemd[1]: run-netns-cni\x2d7fad2ee9\x2d86bf\x2d4b81\x2d8615\x2d7bd6b09be534.mount: Deactivated successfully. 
Mar 17 18:56:22.224765 env[1374]: time="2025-03-17T18:56:22.223954212Z" level=info msg="TearDown network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\" successfully" Mar 17 18:56:22.224765 env[1374]: time="2025-03-17T18:56:22.223975042Z" level=info msg="StopPodSandbox for \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\" returns successfully" Mar 17 18:56:22.225731 kubelet[2365]: I0317 18:56:22.223912 2365 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:56:22.233389 env[1374]: time="2025-03-17T18:56:22.233374092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xnh4,Uid:1165f5ec-1445-4386-b540-a9b8a16322f3,Namespace:calico-system,Attempt:1,}" Mar 17 18:56:22.249771 systemd-networkd[1121]: calibb64b138edc: Gained IPv6LL Mar 17 18:56:22.318000 audit[4295]: NETFILTER_CFG table=filter:101 family=2 entries=12 op=nft_register_rule pid=4295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:22.318000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=4420 a0=3 a1=7ffe0b1b1170 a2=0 a3=7ffe0b1b115c items=0 ppid=2501 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:22.321000 audit[4295]: NETFILTER_CFG table=nat:102 family=2 entries=18 op=nft_register_rule pid=4295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:22.321000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffe0b1b1170 a2=0 a3=7ffe0b1b115c items=0 ppid=2501 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.321000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:22.399483 env[1374]: time="2025-03-17T18:56:22.399281802Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:22.401617 env[1374]: time="2025-03-17T18:56:22.401422767Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:22.404334 env[1374]: time="2025-03-17T18:56:22.404316245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:22.406144 env[1374]: time="2025-03-17T18:56:22.406130660Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:22.406561 env[1374]: time="2025-03-17T18:56:22.406542703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Mar 17 18:56:22.409908 env[1374]: time="2025-03-17T18:56:22.409881147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Mar 17 18:56:22.433345 env[1374]: time="2025-03-17T18:56:22.433027564Z" level=info msg="CreateContainer within sandbox \"53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 18:56:22.440012 env[1374]: time="2025-03-17T18:56:22.439977746Z" level=info msg="CreateContainer within sandbox \"53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"99a200267b42c17904941408536c52b8d061e33a976c7fe0f089257cbfa42751\"" Mar 17 18:56:22.441166 env[1374]: time="2025-03-17T18:56:22.440495700Z" level=info msg="StartContainer for \"99a200267b42c17904941408536c52b8d061e33a976c7fe0f089257cbfa42751\"" Mar 17 18:56:22.441788 systemd-networkd[1121]: cali8c3d9f9999d: Gained IPv6LL Mar 17 18:56:22.462386 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:56:22.462452 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9b53e4aacdd: link becomes ready Mar 17 18:56:22.462508 systemd-networkd[1121]: cali9b53e4aacdd: Link UP Mar 17 18:56:22.462612 systemd-networkd[1121]: cali9b53e4aacdd: Gained carrier Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.289 [INFO][4281] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.305 [INFO][4281] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9xnh4-eth0 csi-node-driver- calico-system 1165f5ec-1445-4386-b540-a9b8a16322f3 811 0 2025-03-17 18:55:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9xnh4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9b53e4aacdd [] []}} ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Namespace="calico-system" Pod="csi-node-driver-9xnh4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9xnh4-" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.305 [INFO][4281] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Namespace="calico-system" Pod="csi-node-driver-9xnh4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.419 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" HandleID="k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.433 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" HandleID="k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000100bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9xnh4", "timestamp":"2025-03-17 18:56:22.419424691 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.433 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.433 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.433 [INFO][4296] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.434 [INFO][4296] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.437 [INFO][4296] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.439 [INFO][4296] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.442 [INFO][4296] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.443 [INFO][4296] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.443 [INFO][4296] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.446 [INFO][4296] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.450 [INFO][4296] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.454 [INFO][4296] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.455 [INFO][4296] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" host="localhost" Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.455 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:56:22.475867 env[1374]: 2025-03-17 18:56:22.455 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" HandleID="k8s-pod-network.4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.477169 env[1374]: 2025-03-17 18:56:22.456 [INFO][4281] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Namespace="calico-system" Pod="csi-node-driver-9xnh4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9xnh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9xnh4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1165f5ec-1445-4386-b540-a9b8a16322f3", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9xnh4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b53e4aacdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:22.477169 env[1374]: 2025-03-17 18:56:22.456 [INFO][4281] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Namespace="calico-system" Pod="csi-node-driver-9xnh4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.477169 env[1374]: 2025-03-17 18:56:22.457 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b53e4aacdd ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Namespace="calico-system" Pod="csi-node-driver-9xnh4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.477169 env[1374]: 2025-03-17 18:56:22.465 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Namespace="calico-system" Pod="csi-node-driver-9xnh4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.477169 env[1374]: 2025-03-17 18:56:22.465 [INFO][4281] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Namespace="calico-system" Pod="csi-node-driver-9xnh4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9xnh4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9xnh4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1165f5ec-1445-4386-b540-a9b8a16322f3", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de", Pod:"csi-node-driver-9xnh4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b53e4aacdd", MAC:"b6:56:c7:0a:8d:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:22.477169 env[1374]: 2025-03-17 18:56:22.474 [INFO][4281] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de" Namespace="calico-system" Pod="csi-node-driver-9xnh4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:22.492399 env[1374]: time="2025-03-17T18:56:22.492344565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:56:22.492506 env[1374]: time="2025-03-17T18:56:22.492388451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:56:22.492506 env[1374]: time="2025-03-17T18:56:22.492397085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:56:22.492580 env[1374]: time="2025-03-17T18:56:22.492518455Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de pid=4346 runtime=io.containerd.runc.v2 Mar 17 18:56:22.515461 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:56:22.531417 env[1374]: time="2025-03-17T18:56:22.531392799Z" level=info msg="StartContainer for \"99a200267b42c17904941408536c52b8d061e33a976c7fe0f089257cbfa42751\" returns successfully" Mar 17 18:56:22.536174 env[1374]: time="2025-03-17T18:56:22.536098880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9xnh4,Uid:1165f5ec-1445-4386-b540-a9b8a16322f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de\"" Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.872882 kernel: kauditd_printk_skb: 49 callbacks suppressed Mar 17 18:56:22.872916 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.872936 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.872950 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.881462 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.881501 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.883980 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.884000 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.884015 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.888767 kernel: audit: type=1400 audit(1742237782.863:305): avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.863000 audit: BPF prog-id=10 op=LOAD Mar 17 18:56:22.893726 kernel: audit: type=1334 audit(1742237782.863:305): prog-id=10 op=LOAD Mar 17 18:56:22.863000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd57009a80 a2=98 a3=3 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.863000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.866000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.876000 audit: BPF prog-id=11 op=LOAD Mar 17 18:56:22.876000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd57009860 a2=74 a3=540051 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.876000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.882000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.882000 audit: BPF prog-id=12 op=LOAD Mar 17 18:56:22.882000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd57009890 a2=94 a3=2 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.882000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.887000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit: BPF prog-id=13 op=LOAD Mar 17 18:56:22.970000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd57009750 a2=40 a3=1 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.970000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:56:22.970000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.970000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd57009820 a2=50 a3=7ffd57009900 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.970000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.977000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.977000 
audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd57009760 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.977000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.977000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd57009790 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.977000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.977000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd570096a0 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.977000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.977000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd570097b0 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd57009790 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd57009780 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd570097b0 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd57009790 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd570097b0 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd57009780 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd570097f0 a2=28 a3=0 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 
success=yes exit=5 a0=0 a1=7ffd570095a0 a2=50 a3=1 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.978000 audit: BPF prog-id=14 op=LOAD Mar 17 18:56:22.978000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd570095a0 a2=94 a3=5 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.979000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd57009650 a2=50 a3=1 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.979000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd57009770 a2=4 a3=38 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.979000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.979000 audit[4411]: AVC avc: denied { confidentiality } for pid=4411 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:56:22.979000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd570097c0 a2=94 a3=6 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.979000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { confidentiality } for pid=4411 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:56:22.980000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd57008f70 a2=94 a3=83 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.980000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 
18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { perfmon } for pid=4411 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { bpf } for pid=4411 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:22.980000 audit[4411]: AVC avc: denied { confidentiality } for pid=4411 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:56:22.980000 audit[4411]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd57008f70 a2=94 a3=83 items=0 ppid=4389 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:22.980000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { bpf } for pid=4432 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.167000 audit: BPF prog-id=15 op=LOAD Mar 17 18:56:23.167000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeac5d1270 a2=98 a3=1999999999999999 items=0 ppid=4389 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.167000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:56:23.167000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit: BPF prog-id=16 op=LOAD Mar 17 18:56:23.168000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeac5d1150 a2=74 a3=ffff items=0 ppid=4389 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.168000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:56:23.168000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { perfmon } for pid=4432 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit[4432]: AVC avc: denied { bpf } for pid=4432 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.168000 audit: BPF prog-id=17 op=LOAD Mar 17 18:56:23.168000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeac5d1190 a2=40 a3=7ffeac5d1370 items=0 ppid=4389 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.168000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:56:23.168000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:56:23.250092 systemd-networkd[1121]: vxlan.calico: Link UP Mar 17 18:56:23.250097 systemd-networkd[1121]: vxlan.calico: Gained carrier Mar 17 18:56:23.290302 systemd[1]: run-containerd-runc-k8s.io-99a200267b42c17904941408536c52b8d061e33a976c7fe0f089257cbfa42751-runc.Fn2sa2.mount: Deactivated successfully. 
Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.301000 audit: BPF prog-id=18 op=LOAD Mar 17 18:56:23.301000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef31da3d0 a2=98 a3=ffffffff items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.301000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.301000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } 
for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit: BPF prog-id=19 op=LOAD Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef31da1e0 a2=74 a3=540051 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit: BPF prog-id=20 op=LOAD Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef31da210 a2=94 a3=2 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffef31da0e0 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef31da110 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef31da020 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffef31da130 a2=28 a3=0 
items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffef31da110 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffef31da100 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffef31da130 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef31da110 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef31da130 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef31da100 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffef31da170 a2=28 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for 
pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.303000 audit: BPF prog-id=21 op=LOAD Mar 17 18:56:23.303000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef31d9fe0 a2=40 a3=0 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.303000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.303000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffef31d9fd0 a2=50 a3=2800 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.305000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffef31d9fd0 a2=50 a3=2800 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.305000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 
audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit: BPF prog-id=22 op=LOAD Mar 17 18:56:23.305000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef31d97f0 a2=94 a3=2 items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.305000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.305000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { perfmon } for pid=4488 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit[4488]: AVC avc: denied { bpf } for pid=4488 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.305000 audit: BPF prog-id=23 op=LOAD Mar 17 18:56:23.305000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef31d98f0 a2=94 a3=2d items=0 ppid=4389 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.305000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
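Decoded the same way, the PROCTITLE repeated through the pid 4488 records above is bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp, i.e. Calico loading and pinning its XDP prefilter program, while the pid 4491 records that follow decode to bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A, which inspects the program that was just pinned.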
Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit: BPF prog-id=24 op=LOAD Mar 17 18:56:23.311000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff154a4310 a2=98 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.311000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.311000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit: BPF prog-id=25 op=LOAD Mar 17 18:56:23.311000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff154a40f0 a2=74 a3=540051 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.311000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.311000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.311000 audit: BPF prog-id=26 op=LOAD Mar 17 18:56:23.311000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff154a4120 a2=94 a3=2 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.311000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.311000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:56:23.333000 audit[4506]: NETFILTER_CFG table=filter:103 family=2 entries=11 op=nft_register_rule pid=4506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:23.333000 audit[4506]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffee3d36d30 a2=0 a3=7ffee3d36d1c items=0 ppid=2501 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:23.337000 audit[4506]: NETFILTER_CFG table=nat:104 family=2 entries=25 op=nft_register_chain pid=4506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:23.337000 audit[4506]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffee3d36d30 a2=0 a3=7ffee3d36d1c items=0 ppid=2501 pid=4506 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:23.360780 kubelet[2365]: I0317 18:56:23.360634 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b484f5775-2np4h" podStartSLOduration=27.43785133 podStartE2EDuration="30.360620439s" podCreationTimestamp="2025-03-17 18:55:53 +0000 UTC" firstStartedPulling="2025-03-17 18:56:19.486051314 +0000 UTC m=+47.501596081" lastFinishedPulling="2025-03-17 18:56:22.40882042 +0000 UTC m=+50.424365190" observedRunningTime="2025-03-17 18:56:23.280390105 +0000 UTC m=+51.295934884" watchObservedRunningTime="2025-03-17 18:56:23.360620439 +0000 UTC m=+51.376165214" Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit: BPF prog-id=27 op=LOAD Mar 17 18:56:23.417000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff154a3fe0 a2=40 a3=1 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.417000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.417000 
audit: BPF prog-id=27 op=UNLOAD Mar 17 18:56:23.417000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.417000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff154a40b0 a2=50 a3=7fff154a4190 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.417000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.425000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.425000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff154a3ff0 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.425000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff154a4020 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff154a3f30 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff154a4040 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff154a4020 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff154a4010 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff154a4040 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff154a4020 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff154a4040 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff154a4010 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.427000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.427000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff154a4080 a2=28 a3=0 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.427000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff154a3e30 a2=50 a3=1 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit: BPF prog-id=28 op=LOAD Mar 17 18:56:23.428000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff154a3e30 a2=94 a3=5 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.428000 audit: BPF prog-id=28 op=UNLOAD Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff154a3ee0 a2=50 a3=1 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff154a4000 a2=4 a3=38 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.428000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.428000 audit[4491]: AVC avc: denied { confidentiality } for pid=4491 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:56:23.428000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff154a4050 a2=94 a3=6 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { confidentiality } for pid=4491 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:56:23.430000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff154a3800 a2=94 a3=83 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { perfmon } for pid=4491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.430000 audit[4491]: AVC avc: denied { confidentiality } for pid=4491 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:56:23.430000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff154a3800 a2=94 a3=83 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.431000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.431000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff154a5240 a2=10 a3=f1f00800 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.431000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.431000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff154a50e0 a2=10 a3=3 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 
17 18:56:23.431000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.431000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff154a5080 a2=10 a3=3 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.431000 audit[4491]: AVC avc: denied { bpf } for pid=4491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:56:23.431000 audit[4491]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff154a5080 a2=10 a3=7 items=0 ppid=4389 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:56:23.439000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:56:23.509000 audit[4538]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=4538 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:56:23.509000 audit[4538]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffb3d78a80 a2=0 a3=7fffb3d78a6c items=0 ppid=4389 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.509000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:56:23.515000 audit[4539]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=4539 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:56:23.515000 audit[4539]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd5a1de730 a2=0 a3=7ffd5a1de71c items=0 ppid=4389 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.515000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:56:23.519000 audit[4540]: NETFILTER_CFG table=filter:107 family=2 entries=209 op=nft_register_chain pid=4540 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:56:23.519000 audit[4540]: SYSCALL arch=c000003e syscall=46 success=yes exit=122920 a0=3 a1=7ffcd7136240 a2=0 a3=7ffcd713622c items=0 ppid=4389 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:56:23.521000 audit[4537]: NETFILTER_CFG table=raw:108 family=2 entries=21 op=nft_register_chain pid=4537 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:56:23.521000 audit[4537]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd15e31610 a2=0 a3=7ffd15e315fc items=0 ppid=4389 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:23.521000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:56:23.850765 systemd-networkd[1121]: cali9b53e4aacdd: Gained IPv6LL Mar 17 18:56:24.553873 systemd-networkd[1121]: vxlan.calico: Gained IPv6LL Mar 17 18:56:25.339643 env[1374]: time="2025-03-17T18:56:25.339613490Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:25.342881 env[1374]: time="2025-03-17T18:56:25.342862450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:25.343918 env[1374]: time="2025-03-17T18:56:25.343904494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:25.344481 env[1374]: time="2025-03-17T18:56:25.344464823Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:25.344920 env[1374]: time="2025-03-17T18:56:25.344900600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Mar 17 18:56:25.346387 env[1374]: time="2025-03-17T18:56:25.346105515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Mar 17 18:56:25.347443 env[1374]: time="2025-03-17T18:56:25.347369484Z" level=info msg="CreateContainer within sandbox \"9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 18:56:25.356095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209460522.mount: Deactivated successfully. 
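The audit PROCTITLE values recorded above are the offending command lines, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch follows, using the hex strings copied verbatim from the records above; decode_proctitle is an illustrative helper name, not a tool that appears in this log.

    # Audit PROCTITLE records hex-encode the command line; arguments are
    # separated by NUL bytes. decode_proctitle is an illustrative helper.
    def decode_proctitle(hex_proctitle: str) -> list[str]:
        raw = bytes.fromhex(hex_proctitle)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Hex values copied from the audit records above.
    print(decode_proctitle(
        "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
        "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F7072"
        "6566696C7465725F76315F63616C69636F5F746D705F41"))
    # ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
    #  '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']

    print(decode_proctitle(
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D"
        "766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C"
        "003530303030"))
    # ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10',
    #  '--wait-interval', '50000']

So the denied bpftool invocation was inspecting a pinned Calico XDP program, and the netfilter changes were applied by iptables-nft-restore with --noflush.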
Mar 17 18:56:25.358579 env[1374]: time="2025-03-17T18:56:25.358559478Z" level=info msg="CreateContainer within sandbox \"9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"115c301536a69ca3861f63eab98fca649d6475f9dd0dea5d2dd878e39d944fc9\"" Mar 17 18:56:25.359012 env[1374]: time="2025-03-17T18:56:25.358994715Z" level=info msg="StartContainer for \"115c301536a69ca3861f63eab98fca649d6475f9dd0dea5d2dd878e39d944fc9\"" Mar 17 18:56:25.378986 systemd[1]: run-containerd-runc-k8s.io-115c301536a69ca3861f63eab98fca649d6475f9dd0dea5d2dd878e39d944fc9-runc.jUSdir.mount: Deactivated successfully. Mar 17 18:56:25.420537 env[1374]: time="2025-03-17T18:56:25.420510468Z" level=info msg="StartContainer for \"115c301536a69ca3861f63eab98fca649d6475f9dd0dea5d2dd878e39d944fc9\" returns successfully" Mar 17 18:56:25.804149 env[1374]: time="2025-03-17T18:56:25.804127163Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:25.813120 env[1374]: time="2025-03-17T18:56:25.813103154Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:25.823470 env[1374]: time="2025-03-17T18:56:25.823448735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:25.831364 env[1374]: time="2025-03-17T18:56:25.831343145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:25.831738 env[1374]: time="2025-03-17T18:56:25.831717663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Mar 17 18:56:25.836969 env[1374]: time="2025-03-17T18:56:25.836948169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Mar 17 18:56:25.837618 env[1374]: time="2025-03-17T18:56:25.837592571Z" level=info msg="CreateContainer within sandbox \"be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 18:56:25.919370 env[1374]: time="2025-03-17T18:56:25.919318075Z" level=info msg="CreateContainer within sandbox \"be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c6ed591bd3224aa33da4857dc87252bf54a98d87f05db55220769a002eaaafa3\"" Mar 17 18:56:25.919877 env[1374]: time="2025-03-17T18:56:25.919859470Z" level=info msg="StartContainer for \"c6ed591bd3224aa33da4857dc87252bf54a98d87f05db55220769a002eaaafa3\"" Mar 17 18:56:26.071497 env[1374]: time="2025-03-17T18:56:26.071429161Z" level=info msg="StartContainer for \"c6ed591bd3224aa33da4857dc87252bf54a98d87f05db55220769a002eaaafa3\" returns successfully" Mar 17 18:56:26.443039 kubelet[2365]: I0317 18:56:26.442959 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-587b6b84cf-tshpw" podStartSLOduration=28.788762613 
podStartE2EDuration="33.442943042s" podCreationTimestamp="2025-03-17 18:55:53 +0000 UTC" firstStartedPulling="2025-03-17 18:56:20.691561451 +0000 UTC m=+48.707106219" lastFinishedPulling="2025-03-17 18:56:25.345741878 +0000 UTC m=+53.361286648" observedRunningTime="2025-03-17 18:56:26.429533536 +0000 UTC m=+54.445078316" watchObservedRunningTime="2025-03-17 18:56:26.442943042 +0000 UTC m=+54.458487816" Mar 17 18:56:26.451950 kubelet[2365]: I0317 18:56:26.451916 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-587b6b84cf-nvpll" podStartSLOduration=28.312028528 podStartE2EDuration="33.451903785s" podCreationTimestamp="2025-03-17 18:55:53 +0000 UTC" firstStartedPulling="2025-03-17 18:56:20.692386408 +0000 UTC m=+48.707931176" lastFinishedPulling="2025-03-17 18:56:25.832261659 +0000 UTC m=+53.847806433" observedRunningTime="2025-03-17 18:56:26.451759651 +0000 UTC m=+54.467304434" watchObservedRunningTime="2025-03-17 18:56:26.451903785 +0000 UTC m=+54.467448552" Mar 17 18:56:26.485000 audit[4622]: NETFILTER_CFG table=filter:109 family=2 entries=10 op=nft_register_rule pid=4622 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:26.485000 audit[4622]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffda0a9de40 a2=0 a3=7ffda0a9de2c items=0 ppid=2501 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:26.485000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:26.489000 audit[4622]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=4622 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:26.489000 audit[4622]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffda0a9de40 a2=0 a3=7ffda0a9de2c items=0 ppid=2501 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:26.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:26.513000 audit[4624]: NETFILTER_CFG table=filter:111 family=2 entries=9 op=nft_register_rule pid=4624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:26.513000 audit[4624]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe9cead380 a2=0 a3=7ffe9cead36c items=0 ppid=2501 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:26.513000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:26.519000 audit[4624]: NETFILTER_CFG table=nat:112 family=2 entries=27 op=nft_register_chain pid=4624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:26.519000 audit[4624]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe9cead380 a2=0 a3=7ffe9cead36c items=0 ppid=2501 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:26.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:27.353937 kubelet[2365]: I0317 18:56:27.351876 2365 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:56:27.600786 env[1374]: time="2025-03-17T18:56:27.600761964Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:27.615420 env[1374]: time="2025-03-17T18:56:27.615377778Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:27.623116 env[1374]: time="2025-03-17T18:56:27.623093491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:27.627905 env[1374]: time="2025-03-17T18:56:27.627886547Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:27.628271 env[1374]: time="2025-03-17T18:56:27.628251143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Mar 17 18:56:27.700009 env[1374]: time="2025-03-17T18:56:27.699974483Z" level=info msg="CreateContainer within sandbox \"4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 18:56:27.713627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1049049135.mount: Deactivated successfully. 
Mar 17 18:56:27.716773 env[1374]: time="2025-03-17T18:56:27.716750713Z" level=info msg="CreateContainer within sandbox \"4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e68be7a87e2c0187c218fb96efa245295534a3b2fb73259bac2fd5ec0c007fc3\"" Mar 17 18:56:27.717909 env[1374]: time="2025-03-17T18:56:27.717304904Z" level=info msg="StartContainer for \"e68be7a87e2c0187c218fb96efa245295534a3b2fb73259bac2fd5ec0c007fc3\"" Mar 17 18:56:27.768340 env[1374]: time="2025-03-17T18:56:27.767253015Z" level=info msg="StartContainer for \"e68be7a87e2c0187c218fb96efa245295534a3b2fb73259bac2fd5ec0c007fc3\" returns successfully" Mar 17 18:56:27.771505 env[1374]: time="2025-03-17T18:56:27.771471005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Mar 17 18:56:29.474215 env[1374]: time="2025-03-17T18:56:29.474186512Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:29.475371 env[1374]: time="2025-03-17T18:56:29.475358553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:29.476298 env[1374]: time="2025-03-17T18:56:29.476287283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:29.477132 env[1374]: time="2025-03-17T18:56:29.477120602Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:56:29.477808 env[1374]: time="2025-03-17T18:56:29.477397947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Mar 17 18:56:29.484358 env[1374]: time="2025-03-17T18:56:29.484340164Z" level=info msg="CreateContainer within sandbox \"4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 18:56:29.492307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819526045.mount: Deactivated successfully. 
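The containerd env[1374] entries throughout this log carry logfmt-style key="value" fields (time, level, msg, with embedded \" escapes inside msg). A small, generic parsing sketch that pulls those fields apart; parse_logfmt and FIELD are illustrative names, not tools present on this host, and the sample line is copied from the PullImage record above.

    import re

    # Logfmt-ish fields: key=value, where value is either a quoted string with
    # \" escapes or a bare token. Illustrative sketch only.
    FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_logfmt(line: str) -> dict[str, str]:
        fields = {}
        for key, val in FIELD.findall(line):
            if val.startswith('"') and val.endswith('"'):
                val = val[1:-1].replace('\\"', '"')
            fields[key] = val
        return fields

    sample = ('time="2025-03-17T18:56:27.628251143Z" level=info '
              'msg="PullImage \\"ghcr.io/flatcar/calico/csi:v3.29.1\\" returns image reference '
              '\\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\\""')
    rec = parse_logfmt(sample)
    print(rec["level"], rec["msg"])
    # info PullImage "ghcr.io/flatcar/calico/csi:v3.29.1" returns image reference "sha256:bda8..."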
Mar 17 18:56:29.500972 env[1374]: time="2025-03-17T18:56:29.500952458Z" level=info msg="CreateContainer within sandbox \"4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2828ac640a709c6a5b92686db5693def3c994ea767d121febcda269a0c7f93ca\"" Mar 17 18:56:29.503226 env[1374]: time="2025-03-17T18:56:29.503210874Z" level=info msg="StartContainer for \"2828ac640a709c6a5b92686db5693def3c994ea767d121febcda269a0c7f93ca\"" Mar 17 18:56:29.556092 env[1374]: time="2025-03-17T18:56:29.556068928Z" level=info msg="StartContainer for \"2828ac640a709c6a5b92686db5693def3c994ea767d121febcda269a0c7f93ca\" returns successfully" Mar 17 18:56:30.293227 kubelet[2365]: I0317 18:56:30.292234 2365 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 18:56:30.295284 kubelet[2365]: I0317 18:56:30.295274 2365 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 18:56:30.403049 kubelet[2365]: I0317 18:56:30.403006 2365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9xnh4" podStartSLOduration=30.458098723 podStartE2EDuration="37.399506603s" podCreationTimestamp="2025-03-17 18:55:53 +0000 UTC" firstStartedPulling="2025-03-17 18:56:22.536932394 +0000 UTC m=+50.552477165" lastFinishedPulling="2025-03-17 18:56:29.478340275 +0000 UTC m=+57.493885045" observedRunningTime="2025-03-17 18:56:30.398480721 +0000 UTC m=+58.414025499" watchObservedRunningTime="2025-03-17 18:56:30.399506603 +0000 UTC m=+58.415051377" Mar 17 18:56:30.489438 systemd[1]: run-containerd-runc-k8s.io-2828ac640a709c6a5b92686db5693def3c994ea767d121febcda269a0c7f93ca-runc.M8W5Sa.mount: Deactivated successfully. Mar 17 18:56:32.310821 env[1374]: time="2025-03-17T18:56:32.310534233Z" level=info msg="StopPodSandbox for \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\"" Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:32.947 [WARNING][4720] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0", GenerateName:"calico-apiserver-587b6b84cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"885d0d3f-49ac-45f6-83b5-96856df9e4b0", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587b6b84cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a", Pod:"calico-apiserver-587b6b84cf-nvpll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb64b138edc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:32.950 [INFO][4720] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:32.950 [INFO][4720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" iface="eth0" netns="" Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:32.950 [INFO][4720] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:32.950 [INFO][4720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:33.448 [INFO][4726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:33.450 [INFO][4726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:33.450 [INFO][4726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:33.460 [WARNING][4726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:33.460 [INFO][4726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:33.461 [INFO][4726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.464111 env[1374]: 2025-03-17 18:56:33.462 [INFO][4720] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:33.467714 env[1374]: time="2025-03-17T18:56:33.464622743Z" level=info msg="TearDown network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\" successfully" Mar 17 18:56:33.467714 env[1374]: time="2025-03-17T18:56:33.464649818Z" level=info msg="StopPodSandbox for \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\" returns successfully" Mar 17 18:56:33.468535 env[1374]: time="2025-03-17T18:56:33.468519641Z" level=info msg="RemovePodSandbox for \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\"" Mar 17 18:56:33.469239 env[1374]: time="2025-03-17T18:56:33.468612277Z" level=info msg="Forcibly stopping sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\"" Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.496 [WARNING][4744] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0", GenerateName:"calico-apiserver-587b6b84cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"885d0d3f-49ac-45f6-83b5-96856df9e4b0", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587b6b84cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be7eeed475cd5cae20423ab3082c8a8ed5bfad68f7c6d581f819a5b64f1d6b9a", Pod:"calico-apiserver-587b6b84cf-nvpll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb64b138edc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.496 [INFO][4744] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.496 [INFO][4744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" iface="eth0" netns="" Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.496 [INFO][4744] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.496 [INFO][4744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.511 [INFO][4750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.511 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.511 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.517 [WARNING][4750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.517 [INFO][4750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" HandleID="k8s-pod-network.5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Workload="localhost-k8s-calico--apiserver--587b6b84cf--nvpll-eth0" Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.518 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.521492 env[1374]: 2025-03-17 18:56:33.519 [INFO][4744] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b" Mar 17 18:56:33.524680 env[1374]: time="2025-03-17T18:56:33.521613987Z" level=info msg="TearDown network for sandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\" successfully" Mar 17 18:56:33.529593 env[1374]: time="2025-03-17T18:56:33.529571389Z" level=info msg="RemovePodSandbox \"5363aa4b94b9236647860e52880d93ef485efc73f196918bdd2044d058bcb94b\" returns successfully" Mar 17 18:56:33.530067 env[1374]: time="2025-03-17T18:56:33.530053737Z" level=info msg="StopPodSandbox for \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\"" Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.559 [WARNING][4777] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6", Pod:"coredns-7db6d8ff4d-pnp4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c3d9f9999d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.559 [INFO][4777] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.559 [INFO][4777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" iface="eth0" netns="" Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.559 [INFO][4777] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.559 [INFO][4777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.578 [INFO][4784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.578 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.578 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.582 [WARNING][4784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.582 [INFO][4784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.583 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.585769 env[1374]: 2025-03-17 18:56:33.584 [INFO][4777] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:33.596740 env[1374]: time="2025-03-17T18:56:33.585794737Z" level=info msg="TearDown network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\" successfully" Mar 17 18:56:33.596740 env[1374]: time="2025-03-17T18:56:33.585820480Z" level=info msg="StopPodSandbox for \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\" returns successfully" Mar 17 18:56:33.641898 env[1374]: time="2025-03-17T18:56:33.641874741Z" level=info msg="RemovePodSandbox for \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\"" Mar 17 18:56:33.642031 env[1374]: time="2025-03-17T18:56:33.642006084Z" level=info msg="Forcibly stopping sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\"" Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.670 [WARNING][4802] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"14dd1746-3c8a-4dad-bbd6-5fdc39d5ff04", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8db486b3a313655cb1cea96d95b2eede8b2595d23c4e84e9ab64b800d4d663c6", Pod:"coredns-7db6d8ff4d-pnp4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c3d9f9999d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.671 [INFO][4802] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.671 [INFO][4802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" iface="eth0" netns="" Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.671 [INFO][4802] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.671 [INFO][4802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.684 [INFO][4808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.684 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.684 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.688 [WARNING][4808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.688 [INFO][4808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" HandleID="k8s-pod-network.4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Workload="localhost-k8s-coredns--7db6d8ff4d--pnp4k-eth0" Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.689 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.691721 env[1374]: 2025-03-17 18:56:33.690 [INFO][4802] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5" Mar 17 18:56:33.702317 env[1374]: time="2025-03-17T18:56:33.691737910Z" level=info msg="TearDown network for sandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\" successfully" Mar 17 18:56:33.702317 env[1374]: time="2025-03-17T18:56:33.699453316Z" level=info msg="RemovePodSandbox \"4c4f8579042260aba5cd34ce7fbe0d5de97c4b51906bef63047b9403443b37e5\" returns successfully" Mar 17 18:56:33.702317 env[1374]: time="2025-03-17T18:56:33.700821654Z" level=info msg="StopPodSandbox for \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\"" Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.733 [WARNING][4827] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0", GenerateName:"calico-kube-controllers-5b484f5775-", Namespace:"calico-system", SelfLink:"", UID:"1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b484f5775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a", Pod:"calico-kube-controllers-5b484f5775-2np4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid64bdfa6a82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.733 [INFO][4827] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.733 
[INFO][4827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" iface="eth0" netns="" Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.733 [INFO][4827] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.733 [INFO][4827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.749 [INFO][4833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.749 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.749 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.753 [WARNING][4833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.753 [INFO][4833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.754 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.756593 env[1374]: 2025-03-17 18:56:33.755 [INFO][4827] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:33.757206 env[1374]: time="2025-03-17T18:56:33.756612246Z" level=info msg="TearDown network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\" successfully" Mar 17 18:56:33.757206 env[1374]: time="2025-03-17T18:56:33.756636959Z" level=info msg="StopPodSandbox for \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\" returns successfully" Mar 17 18:56:33.757359 env[1374]: time="2025-03-17T18:56:33.757344729Z" level=info msg="RemovePodSandbox for \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\"" Mar 17 18:56:33.757449 env[1374]: time="2025-03-17T18:56:33.757425498Z" level=info msg="Forcibly stopping sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\"" Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.783 [WARNING][4851] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0", GenerateName:"calico-kube-controllers-5b484f5775-", Namespace:"calico-system", SelfLink:"", UID:"1b4e72cc-9c1e-4894-b12d-91c7c43e0b2e", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b484f5775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53a9b2092ba20491358f8c360d0ea8e585c789b14db14a62d2bfecc5af87af8a", Pod:"calico-kube-controllers-5b484f5775-2np4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid64bdfa6a82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.783 [INFO][4851] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.783 [INFO][4851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" iface="eth0" netns="" Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.783 [INFO][4851] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.783 [INFO][4851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.799 [INFO][4857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.799 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.799 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.802 [WARNING][4857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.802 [INFO][4857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" HandleID="k8s-pod-network.d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Workload="localhost-k8s-calico--kube--controllers--5b484f5775--2np4h-eth0" Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.803 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.805931 env[1374]: 2025-03-17 18:56:33.804 [INFO][4851] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126" Mar 17 18:56:33.807300 env[1374]: time="2025-03-17T18:56:33.806205210Z" level=info msg="TearDown network for sandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\" successfully" Mar 17 18:56:33.807702 env[1374]: time="2025-03-17T18:56:33.807687905Z" level=info msg="RemovePodSandbox \"d33229e751fd5d3a612975e99cba148ce3795a70ef61540367ad752bedc2c126\" returns successfully" Mar 17 18:56:33.808051 env[1374]: time="2025-03-17T18:56:33.808038210Z" level=info msg="StopPodSandbox for \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\"" Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.831 [WARNING][4875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0", GenerateName:"calico-apiserver-587b6b84cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a66bb45-5aee-4882-aac7-d77a88647401", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587b6b84cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb", Pod:"calico-apiserver-587b6b84cf-tshpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d6eabedab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.831 [INFO][4875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:33.851976 env[1374]: 2025-03-17 
18:56:33.831 [INFO][4875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" iface="eth0" netns="" Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.831 [INFO][4875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.831 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.844 [INFO][4881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.844 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.844 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.849 [WARNING][4881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.849 [INFO][4881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.850 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.851976 env[1374]: 2025-03-17 18:56:33.851 [INFO][4875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:33.852358 env[1374]: time="2025-03-17T18:56:33.852337805Z" level=info msg="TearDown network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\" successfully" Mar 17 18:56:33.852446 env[1374]: time="2025-03-17T18:56:33.852434077Z" level=info msg="StopPodSandbox for \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\" returns successfully" Mar 17 18:56:33.852837 env[1374]: time="2025-03-17T18:56:33.852818790Z" level=info msg="RemovePodSandbox for \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\"" Mar 17 18:56:33.852873 env[1374]: time="2025-03-17T18:56:33.852840258Z" level=info msg="Forcibly stopping sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\"" Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.873 [WARNING][4899] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0", GenerateName:"calico-apiserver-587b6b84cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a66bb45-5aee-4882-aac7-d77a88647401", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587b6b84cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9687dec192075599504d74362e43977fd3b9ffd9730f94e03b7fe8ffa8dd3cbb", Pod:"calico-apiserver-587b6b84cf-tshpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d6eabedab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.873 [INFO][4899] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.873 [INFO][4899] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" iface="eth0" netns="" Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.873 [INFO][4899] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.873 [INFO][4899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.891 [INFO][4905] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.891 [INFO][4905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.891 [INFO][4905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.895 [WARNING][4905] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.895 [INFO][4905] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" HandleID="k8s-pod-network.2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Workload="localhost-k8s-calico--apiserver--587b6b84cf--tshpw-eth0" Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.895 [INFO][4905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.897843 env[1374]: 2025-03-17 18:56:33.896 [INFO][4899] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e" Mar 17 18:56:33.902792 env[1374]: time="2025-03-17T18:56:33.897862034Z" level=info msg="TearDown network for sandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\" successfully" Mar 17 18:56:33.906048 env[1374]: time="2025-03-17T18:56:33.904990801Z" level=info msg="RemovePodSandbox \"2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e\" returns successfully" Mar 17 18:56:33.906048 env[1374]: time="2025-03-17T18:56:33.905251164Z" level=info msg="StopPodSandbox for \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\"" Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.967 [WARNING][4923] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9xnh4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1165f5ec-1445-4386-b540-a9b8a16322f3", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de", Pod:"csi-node-driver-9xnh4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b53e4aacdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.967 [INFO][4923] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.967 [INFO][4923] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" iface="eth0" netns="" Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.967 [INFO][4923] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.967 [INFO][4923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.980 [INFO][4929] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.981 [INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.981 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.984 [WARNING][4929] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.985 [INFO][4929] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.985 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:33.987710 env[1374]: 2025-03-17 18:56:33.986 [INFO][4923] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:33.988920 env[1374]: time="2025-03-17T18:56:33.988093729Z" level=info msg="TearDown network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\" successfully" Mar 17 18:56:33.988920 env[1374]: time="2025-03-17T18:56:33.988113343Z" level=info msg="StopPodSandbox for \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\" returns successfully" Mar 17 18:56:33.988920 env[1374]: time="2025-03-17T18:56:33.988413149Z" level=info msg="RemovePodSandbox for \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\"" Mar 17 18:56:33.988920 env[1374]: time="2025-03-17T18:56:33.988434555Z" level=info msg="Forcibly stopping sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\"" Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.010 [WARNING][4947] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9xnh4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1165f5ec-1445-4386-b540-a9b8a16322f3", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4894f9124a9318b66513af8ba5043a8158b92c40f35d37e3427b18d90f25c6de", Pod:"csi-node-driver-9xnh4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b53e4aacdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.010 [INFO][4947] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.011 [INFO][4947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" iface="eth0" netns="" Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.011 [INFO][4947] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.011 [INFO][4947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.025 [INFO][4953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.025 [INFO][4953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.025 [INFO][4953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.028 [WARNING][4953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.028 [INFO][4953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" HandleID="k8s-pod-network.34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Workload="localhost-k8s-csi--node--driver--9xnh4-eth0" Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.029 [INFO][4953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:34.032981 env[1374]: 2025-03-17 18:56:34.030 [INFO][4947] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a" Mar 17 18:56:34.032981 env[1374]: time="2025-03-17T18:56:34.031780067Z" level=info msg="TearDown network for sandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\" successfully" Mar 17 18:56:34.034130 env[1374]: time="2025-03-17T18:56:34.034111762Z" level=info msg="RemovePodSandbox \"34b6fb056c4ba8264d1f360319951cf53406c1dae9462f925580affb19c53b0a\" returns successfully" Mar 17 18:56:34.034455 env[1374]: time="2025-03-17T18:56:34.034441724Z" level=info msg="StopPodSandbox for \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\"" Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.055 [WARNING][4972] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"82369388-1126-47fe-9068-8215d11684f0", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a", Pod:"coredns-7db6d8ff4d-zzgcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56d1bffecf8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 
18:56:34.076019 env[1374]: 2025-03-17 18:56:34.055 [INFO][4972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.055 [INFO][4972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" iface="eth0" netns="" Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.055 [INFO][4972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.055 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.069 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.069 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.069 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.073 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.073 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.073 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:34.076019 env[1374]: 2025-03-17 18:56:34.074 [INFO][4972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:34.077057 env[1374]: time="2025-03-17T18:56:34.076039102Z" level=info msg="TearDown network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\" successfully" Mar 17 18:56:34.077057 env[1374]: time="2025-03-17T18:56:34.076058542Z" level=info msg="StopPodSandbox for \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\" returns successfully" Mar 17 18:56:34.077057 env[1374]: time="2025-03-17T18:56:34.076520803Z" level=info msg="RemovePodSandbox for \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\"" Mar 17 18:56:34.077057 env[1374]: time="2025-03-17T18:56:34.076539177Z" level=info msg="Forcibly stopping sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\"" Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.103 [WARNING][4996] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"82369388-1126-47fe-9068-8215d11684f0", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 55, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f657534b905e25d12ff7ecd5a03e5ae84499895da28a28fb76621e29ebb3f37a", Pod:"coredns-7db6d8ff4d-zzgcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali56d1bffecf8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.103 [INFO][4996] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.103 [INFO][4996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" iface="eth0" netns="" Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.103 [INFO][4996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.104 [INFO][4996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.126 [INFO][5002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.126 [INFO][5002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.126 [INFO][5002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.129 [WARNING][5002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.129 [INFO][5002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" HandleID="k8s-pod-network.c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Workload="localhost-k8s-coredns--7db6d8ff4d--zzgcp-eth0" Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.130 [INFO][5002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:56:34.133057 env[1374]: 2025-03-17 18:56:34.131 [INFO][4996] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300" Mar 17 18:56:34.133516 env[1374]: time="2025-03-17T18:56:34.133495412Z" level=info msg="TearDown network for sandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\" successfully" Mar 17 18:56:34.142103 env[1374]: time="2025-03-17T18:56:34.142079122Z" level=info msg="RemovePodSandbox \"c951809ac0cc2739b43f39fbcc508c8bfe4166159319b84efd48fecdf57bf300\" returns successfully" Mar 17 18:56:35.690669 systemd[1]: run-containerd-runc-k8s.io-99a200267b42c17904941408536c52b8d061e33a976c7fe0f089257cbfa42751-runc.R50jMx.mount: Deactivated successfully. Mar 17 18:56:43.993733 systemd[1]: Started sshd@7-139.178.70.99:22-139.178.68.195:36574.service. Mar 17 18:56:44.006112 kernel: kauditd_printk_skb: 488 callbacks suppressed Mar 17 18:56:44.012524 kernel: audit: type=1130 audit(1742237803.995:406): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.99:22-139.178.68.195:36574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:43.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.99:22-139.178.68.195:36574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:56:44.132748 kernel: audit: type=1101 audit(1742237804.127:407): pid=5038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:44.127000 audit[5038]: USER_ACCT pid=5038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:44.144302 kernel: audit: type=1103 audit(1742237804.132:408): pid=5038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:44.144337 kernel: audit: type=1006 audit(1742237804.132:409): pid=5038 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Mar 17 18:56:44.144362 kernel: audit: type=1300 audit(1742237804.132:409): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda840c140 a2=3 a3=0 items=0 ppid=1 pid=5038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:44.132000 audit[5038]: CRED_ACQ pid=5038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:44.145997 kernel: audit: type=1327 audit(1742237804.132:409): proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:44.132000 audit[5038]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda840c140 a2=3 a3=0 items=0 ppid=1 pid=5038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:44.132000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:44.148236 sshd[5038]: Accepted publickey for core from 139.178.68.195 port 36574 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:56:44.146014 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:44.171374 systemd-logind[1338]: New session 10 of user core. Mar 17 18:56:44.171776 systemd[1]: Started session-10.scope. 
Mar 17 18:56:44.174000 audit[5038]: USER_START pid=5038 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:44.178000 audit[5041]: CRED_ACQ pid=5041 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:44.182026 kernel: audit: type=1105 audit(1742237804.174:410): pid=5038 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:44.182062 kernel: audit: type=1103 audit(1742237804.178:411): pid=5041 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:45.394347 sshd[5038]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:45.394000 audit[5038]: USER_END pid=5038 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:45.399206 systemd[1]: sshd@7-139.178.70.99:22-139.178.68.195:36574.service: Deactivated successfully. Mar 17 18:56:45.399680 kernel: audit: type=1106 audit(1742237805.394:412): pid=5038 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:45.394000 audit[5038]: CRED_DISP pid=5038 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:45.399957 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:56:45.399992 systemd-logind[1338]: Session 10 logged out. Waiting for processes to exit. Mar 17 18:56:45.404160 kernel: audit: type=1104 audit(1742237805.394:413): pid=5038 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:45.404004 systemd-logind[1338]: Removed session 10. Mar 17 18:56:45.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.99:22-139.178.68.195:36574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:46.366471 systemd[1]: run-containerd-runc-k8s.io-bc402cd8f0d3eb00deac3671dd803fd3f25210ced477678b0696d88d56c67c12-runc.TVHbVw.mount: Deactivated successfully. 
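The env[1374] entries earlier in this window record containerd replaying StopPodSandbox/RemovePodSandbox for sandboxes whose pods have already been re-created, so every step of the Calico CNI teardown is idempotent: the WorkloadEndpoint is left alone because its ContainerID no longer matches the stale sandbox ("don't delete WEP"), CleanUpNamespace becomes a no-op when no netns is supplied, and the IPAM release is serialized behind the host-wide IPAM lock with a missing allocation handle treated as a warning rather than an error. The following Go sketch illustrates that pattern only; it is not Calico's actual implementation, and the ipamStore/teardown names and the seeded address are illustrative assumptions.

package main

import (
	"fmt"
	"sync"
)

// ipamStore is a stand-in for an IPAM backend: it maps allocation handles
// (e.g. "k8s-pod-network.<containerID>") to assigned addresses.
type ipamStore struct {
	mu      sync.Mutex // plays the role of the "host-wide IPAM lock" in the log
	handles map[string]string
}

// releaseByHandle mirrors the logged behaviour: take the lock, release the
// address if the handle exists, and treat a missing handle as a warning so
// that repeated DELs for the same sandbox stay idempotent.
func (s *ipamStore) releaseByHandle(handle string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if addr, ok := s.handles[handle]; ok {
		delete(s.handles, handle)
		fmt.Printf("released %s for handle %s\n", addr, handle)
		return
	}
	fmt.Printf("WARNING: asked to release address for %s but it doesn't exist, ignoring\n", handle)
}

// teardown sketches the DEL flow seen in the log: skip namespace cleanup when
// no netns name is given, then always attempt the IPAM release.
func teardown(store *ipamStore, containerID, netns string) {
	if netns == "" {
		fmt.Println("CleanUpNamespace called with no netns name, ignoring")
	}
	store.releaseByHandle("k8s-pod-network." + containerID)
	fmt.Println("teardown processing complete for", containerID)
}

func main() {
	const id = "2b6272934b19e859eb11817b13cfda1f7e7f2eda2ffe337606b90aa68afbb34e"
	store := &ipamStore{handles: map[string]string{"k8s-pod-network." + id: "192.168.88.133"}}
	teardown(store, id, "") // first DEL releases the address
	teardown(store, id, "") // replayed DEL hits the "doesn't exist" warning, as in the log
}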
Mar 17 18:56:50.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.99:22-139.178.68.195:49598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:50.431346 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:56:50.431385 kernel: audit: type=1130 audit(1742237810.416:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.99:22-139.178.68.195:49598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:50.417487 systemd[1]: Started sshd@8-139.178.70.99:22-139.178.68.195:49598.service. Mar 17 18:56:51.156000 audit[5083]: USER_ACCT pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.158595 sshd[5083]: Accepted publickey for core from 139.178.68.195 port 49598 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:56:51.160677 kernel: audit: type=1101 audit(1742237811.156:416): pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.160000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.163236 sshd[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:51.166022 kernel: audit: type=1103 audit(1742237811.160:417): pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.166069 kernel: audit: type=1006 audit(1742237811.160:418): pid=5083 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Mar 17 18:56:51.166092 kernel: audit: type=1300 audit(1742237811.160:418): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0e051370 a2=3 a3=0 items=0 ppid=1 pid=5083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:51.160000 audit[5083]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0e051370 a2=3 a3=0 items=0 ppid=1 pid=5083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:51.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:51.170447 kernel: audit: type=1327 audit(1742237811.160:418): proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:51.172229 systemd-logind[1338]: New session 11 of user core. Mar 17 18:56:51.172547 systemd[1]: Started session-11.scope. 
Mar 17 18:56:51.175000 audit[5083]: USER_START pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.176000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.182368 kernel: audit: type=1105 audit(1742237811.175:419): pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.182410 kernel: audit: type=1103 audit(1742237811.176:420): pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.429086 sshd[5083]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:51.430000 audit[5083]: USER_END pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.435745 kernel: audit: type=1106 audit(1742237811.430:421): pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.436329 kernel: audit: type=1104 audit(1742237811.430:422): pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.430000 audit[5083]: CRED_DISP pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:51.438378 systemd[1]: sshd@8-139.178.70.99:22-139.178.68.195:49598.service: Deactivated successfully. Mar 17 18:56:51.438869 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:56:51.439934 systemd-logind[1338]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:56:51.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.99:22-139.178.68.195:49598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:51.440649 systemd-logind[1338]: Removed session 11. 
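Each SSH connection above leaves the same audit trail: USER_ACCT and CRED_ACQ when sshd authenticates "core", a SYSCALL/PROCTITLE pair for the privileged sshd process, USER_START when the PAM session opens alongside a systemd session-N.scope, then USER_END, CRED_DISP, and a SERVICE_STOP for the per-connection sshd@… unit once the session closes. The Go sketch below pairs USER_START with USER_END by the ses= field to recover session lifetimes; the auditEvent struct is a hypothetical already-parsed view of a record, not output of any real parser, and real events would come from ausearch or the audit socket.

package main

import (
	"fmt"
	"time"
)

// auditEvent is a hypothetical parsed audit record; the raw log carries these
// fields as type=..., audit(<epoch>.<ms>:<serial>), and ses=....
type auditEvent struct {
	Type    string    // e.g. "USER_START", "USER_END"
	Session int       // the ses= field (kernel audit session id)
	Time    time.Time // parsed from the audit(<epoch>...) timestamp
}

// sessionDurations pairs USER_START with the matching USER_END for each ses=
// value and reports how long that login session lasted.
func sessionDurations(events []auditEvent) map[int]time.Duration {
	starts := map[int]time.Time{}
	durations := map[int]time.Duration{}
	for _, ev := range events {
		switch ev.Type {
		case "USER_START":
			starts[ev.Session] = ev.Time
		case "USER_END":
			if start, ok := starts[ev.Session]; ok {
				durations[ev.Session] = ev.Time.Sub(start)
			}
		}
	}
	return durations
}

func main() {
	// Values approximate session 10 above: opened around 18:56:44, closed about 1.2 s later.
	open := time.Date(2025, time.March, 17, 18, 56, 44, 0, time.UTC)
	events := []auditEvent{
		{Type: "USER_START", Session: 10, Time: open},
		{Type: "USER_END", Session: 10, Time: open.Add(1220 * time.Millisecond)},
	}
	for ses, d := range sessionDurations(events) {
		fmt.Printf("session %d lasted %s\n", ses, d)
	}
}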
Mar 17 18:56:53.063329 kubelet[2365]: I0317 18:56:53.056064 2365 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:56:53.708000 audit[5097]: NETFILTER_CFG table=filter:113 family=2 entries=8 op=nft_register_rule pid=5097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:53.708000 audit[5097]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd093571e0 a2=0 a3=7ffd093571cc items=0 ppid=2501 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:53.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:53.713000 audit[5097]: NETFILTER_CFG table=nat:114 family=2 entries=34 op=nft_register_chain pid=5097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:56:53.713000 audit[5097]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd093571e0 a2=0 a3=7ffd093571cc items=0 ppid=2501 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:53.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:56:56.431606 systemd[1]: Started sshd@9-139.178.70.99:22-139.178.68.195:40642.service. Mar 17 18:56:56.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.99:22-139.178.68.195:40642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:56.432779 kernel: kauditd_printk_skb: 7 callbacks suppressed Mar 17 18:56:56.432819 kernel: audit: type=1130 audit(1742237816.431:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.99:22-139.178.68.195:40642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:56:56.610000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.612870 sshd[5098]: Accepted publickey for core from 139.178.68.195 port 40642 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:56:56.615715 kernel: audit: type=1101 audit(1742237816.610:427): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.615000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.621470 kernel: audit: type=1103 audit(1742237816.615:428): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.621609 kernel: audit: type=1006 audit(1742237816.615:429): pid=5098 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Mar 17 18:56:56.621635 kernel: audit: type=1300 audit(1742237816.615:429): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff262c1940 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:56.615000 audit[5098]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff262c1940 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:56.615000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:56.625177 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:56.626102 kernel: audit: type=1327 audit(1742237816.615:429): proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:56.630070 systemd-logind[1338]: New session 12 of user core. Mar 17 18:56:56.630437 systemd[1]: Started session-12.scope. 
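The PROCTITLE values in these audit records are hex-encoded command lines with NUL-separated arguments: 737368643A20636F7265205B707269765D is "sshd: core [priv]", and the proctitle attached to the NETFILTER_CFG entries at 18:56:53 decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", flags consistent with a periodic kube-proxy rule sync. A small Go helper that performs the same decoding is sketched below; it assumes the input is the bare hex string exactly as printed by the audit subsystem.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into argv: the
// kernel records the command line as NUL-separated arguments.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	for _, h := range []string{
		"737368643A20636F7265205B707269765D", // from the sshd records above
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
	} {
		argv, err := decodeProctitle(h)
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.Join(argv, " "))
	}
}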
Mar 17 18:56:56.633000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.634000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.640542 kernel: audit: type=1105 audit(1742237816.633:430): pid=5098 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.640990 kernel: audit: type=1103 audit(1742237816.634:431): pid=5101 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.860615 systemd[1]: Started sshd@10-139.178.70.99:22-139.178.68.195:40658.service. Mar 17 18:56:56.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.99:22-139.178.68.195:40658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:56.861339 sshd[5098]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:56.863000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.865518 systemd[1]: sshd@9-139.178.70.99:22-139.178.68.195:40642.service: Deactivated successfully. Mar 17 18:56:56.866070 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:56:56.868243 kernel: audit: type=1130 audit(1742237816.860:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.99:22-139.178.68.195:40658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:56.868282 kernel: audit: type=1106 audit(1742237816.863:433): pid=5098 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.868641 systemd-logind[1338]: Session 12 logged out. Waiting for processes to exit. 
Mar 17 18:56:56.863000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.99:22-139.178.68.195:40642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:56.869427 systemd-logind[1338]: Removed session 12. Mar 17 18:56:56.976000 audit[5109]: USER_ACCT pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.977000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.977000 audit[5109]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb30aa670 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:56.977000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:56.985000 audit[5109]: USER_START pid=5109 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.986000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:56.981714 systemd-logind[1338]: New session 13 of user core. Mar 17 18:56:56.978368 sshd[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:56.993972 sshd[5109]: Accepted publickey for core from 139.178.68.195 port 40658 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:56:56.982174 systemd[1]: Started session-13.scope. Mar 17 18:56:57.307051 sshd[5109]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:57.308000 audit[5109]: USER_END pid=5109 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:57.308000 audit[5109]: CRED_DISP pid=5109 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:57.309186 systemd[1]: Started sshd@11-139.178.70.99:22-139.178.68.195:40670.service. 
Mar 17 18:56:57.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.99:22-139.178.68.195:40670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:57.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.99:22-139.178.68.195:40658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:57.313440 systemd[1]: sshd@10-139.178.70.99:22-139.178.68.195:40658.service: Deactivated successfully. Mar 17 18:56:57.314489 systemd-logind[1338]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:56:57.314527 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:56:57.316355 systemd-logind[1338]: Removed session 13. Mar 17 18:56:57.354000 audit[5120]: USER_ACCT pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:57.355797 sshd[5120]: Accepted publickey for core from 139.178.68.195 port 40670 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:56:57.355000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:57.356000 audit[5120]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe42607430 a2=3 a3=0 items=0 ppid=1 pid=5120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:56:57.356000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:56:57.356971 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:56:57.359742 systemd-logind[1338]: New session 14 of user core. Mar 17 18:56:57.360036 systemd[1]: Started session-14.scope. 
Mar 17 18:56:57.362000 audit[5120]: USER_START pid=5120 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:57.363000 audit[5125]: CRED_ACQ pid=5125 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:57.464739 sshd[5120]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:57.464000 audit[5120]: USER_END pid=5120 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:57.465000 audit[5120]: CRED_DISP pid=5120 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:56:57.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.99:22-139.178.68.195:40670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:56:57.466593 systemd[1]: sshd@11-139.178.70.99:22-139.178.68.195:40670.service: Deactivated successfully. Mar 17 18:56:57.467487 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:56:57.467509 systemd-logind[1338]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:56:57.468294 systemd-logind[1338]: Removed session 14. Mar 17 18:57:02.467155 systemd[1]: Started sshd@12-139.178.70.99:22-139.178.68.195:40680.service. Mar 17 18:57:02.471363 kernel: kauditd_printk_skb: 23 callbacks suppressed Mar 17 18:57:02.471400 kernel: audit: type=1130 audit(1742237822.466:453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.99:22-139.178.68.195:40680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:02.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.99:22-139.178.68.195:40680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:57:02.555000 audit[5159]: USER_ACCT pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.561013 sshd[5159]: Accepted publickey for core from 139.178.68.195 port 40680 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:02.561685 kernel: audit: type=1101 audit(1742237822.555:454): pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.561000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.567936 kernel: audit: type=1103 audit(1742237822.561:455): pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.567971 kernel: audit: type=1006 audit(1742237822.561:456): pid=5159 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Mar 17 18:57:02.571482 kernel: audit: type=1300 audit(1742237822.561:456): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3e3a9900 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:02.571505 kernel: audit: type=1327 audit(1742237822.561:456): proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:02.561000 audit[5159]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3e3a9900 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:02.561000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:02.572764 sshd[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:02.577170 systemd-logind[1338]: New session 15 of user core. Mar 17 18:57:02.577481 systemd[1]: Started session-15.scope. 
Mar 17 18:57:02.579000 audit[5159]: USER_START pid=5159 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.584000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.587312 kernel: audit: type=1105 audit(1742237822.579:457): pid=5159 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.587354 kernel: audit: type=1103 audit(1742237822.584:458): pid=5162 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.812373 sshd[5159]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:02.813000 audit[5159]: USER_END pid=5159 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.814000 audit[5159]: CRED_DISP pid=5159 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.821013 kernel: audit: type=1106 audit(1742237822.813:459): pid=5159 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.821054 kernel: audit: type=1104 audit(1742237822.814:460): pid=5159 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:02.822480 systemd-logind[1338]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:57:02.822635 systemd[1]: sshd@12-139.178.70.99:22-139.178.68.195:40680.service: Deactivated successfully. Mar 17 18:57:02.823155 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:57:02.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.99:22-139.178.68.195:40680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:02.823439 systemd-logind[1338]: Removed session 15. Mar 17 18:57:05.646849 systemd[1]: run-containerd-runc-k8s.io-99a200267b42c17904941408536c52b8d061e33a976c7fe0f089257cbfa42751-runc.kBod6x.mount: Deactivated successfully. 
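The per-connection sshd units in the SERVICE_START/SERVICE_STOP records above follow a fixed naming scheme, sshd@<instance>-<local address>:<port>-<peer address>:<port>.service, so the unit name alone shows which client (139.178.68.195) opened each connection to 139.178.70.99:22. The Go sketch below pulls the endpoints back out of such a name; the parsing rules are inferred from the names in this log rather than from any sshd or systemd specification, and IPv6 or otherwise escaped instance names would need different handling.

package main

import (
	"fmt"
	"strings"
)

// parseSSHDUnit extracts the local and remote "ip:port" endpoints from a
// per-connection unit name like
// "sshd@12-139.178.70.99:22-139.178.68.195:40642.service".
func parseSSHDUnit(unit string) (local, remote string, err error) {
	name := strings.TrimSuffix(unit, ".service")
	_, rest, ok := strings.Cut(name, "@")
	if !ok {
		return "", "", fmt.Errorf("not an instantiated unit: %s", unit)
	}
	// Drop the connection counter before the first '-'.
	_, rest, ok = strings.Cut(rest, "-")
	if !ok {
		return "", "", fmt.Errorf("unexpected unit name: %s", unit)
	}
	// What remains is "<local ip:port>-<remote ip:port>".
	local, remote, ok = strings.Cut(rest, "-")
	if !ok {
		return "", "", fmt.Errorf("unexpected unit name: %s", unit)
	}
	return local, remote, nil
}

func main() {
	local, remote, err := parseSSHDUnit("sshd@12-139.178.70.99:22-139.178.68.195:40642.service")
	if err != nil {
		panic(err)
	}
	fmt.Printf("local %s <- remote %s\n", local, remote)
}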
Mar 17 18:57:07.813561 systemd[1]: Started sshd@13-139.178.70.99:22-139.178.68.195:34230.service. Mar 17 18:57:07.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.99:22-139.178.68.195:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:07.823319 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:57:07.833808 kernel: audit: type=1130 audit(1742237827.813:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.99:22-139.178.68.195:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:08.021000 audit[5200]: USER_ACCT pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.021870 sshd[5200]: Accepted publickey for core from 139.178.68.195 port 34230 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:08.025747 kernel: audit: type=1101 audit(1742237828.021:463): pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.025000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.026162 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:08.031223 kernel: audit: type=1103 audit(1742237828.025:464): pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.031273 kernel: audit: type=1006 audit(1742237828.025:465): pid=5200 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Mar 17 18:57:08.031778 kernel: audit: type=1300 audit(1742237828.025:465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca6a304c0 a2=3 a3=0 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:08.025000 audit[5200]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca6a304c0 a2=3 a3=0 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:08.025000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:08.036476 kernel: audit: type=1327 audit(1742237828.025:465): proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:08.037643 systemd-logind[1338]: New session 16 of user core. Mar 17 18:57:08.038003 systemd[1]: Started session-16.scope. 
Mar 17 18:57:08.040000 audit[5200]: USER_START pid=5200 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.040000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.048041 kernel: audit: type=1105 audit(1742237828.040:466): pid=5200 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.048093 kernel: audit: type=1103 audit(1742237828.040:467): pid=5203 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.196786 sshd[5200]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:08.197000 audit[5200]: USER_END pid=5200 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.199182 systemd[1]: sshd@13-139.178.70.99:22-139.178.68.195:34230.service: Deactivated successfully. Mar 17 18:57:08.199687 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:57:08.197000 audit[5200]: CRED_DISP pid=5200 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.205251 kernel: audit: type=1106 audit(1742237828.197:468): pid=5200 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.205304 kernel: audit: type=1104 audit(1742237828.197:469): pid=5200 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:08.205377 systemd-logind[1338]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:57:08.205941 systemd-logind[1338]: Removed session 16. Mar 17 18:57:08.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.99:22-139.178.68.195:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:13.199258 systemd[1]: Started sshd@14-139.178.70.99:22-139.178.68.195:34240.service. 
Mar 17 18:57:13.214949 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:57:13.214992 kernel: audit: type=1130 audit(1742237833.198:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.99:22-139.178.68.195:34240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:13.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.99:22-139.178.68.195:34240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:13.293000 audit[5212]: USER_ACCT pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.294780 sshd[5212]: Accepted publickey for core from 139.178.68.195 port 34240 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:13.298680 kernel: audit: type=1101 audit(1742237833.293:472): pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.298000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.299153 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:13.308924 kernel: audit: type=1103 audit(1742237833.298:473): pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.308956 kernel: audit: type=1006 audit(1742237833.298:474): pid=5212 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Mar 17 18:57:13.308974 kernel: audit: type=1300 audit(1742237833.298:474): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcd20de10 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:13.308993 kernel: audit: type=1327 audit(1742237833.298:474): proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:13.298000 audit[5212]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcd20de10 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:13.298000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:13.309917 systemd[1]: Started session-17.scope. Mar 17 18:57:13.310775 systemd-logind[1338]: New session 17 of user core. 
Mar 17 18:57:13.313000 audit[5212]: USER_START pid=5212 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.314000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.320293 kernel: audit: type=1105 audit(1742237833.313:475): pid=5212 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.320336 kernel: audit: type=1103 audit(1742237833.314:476): pid=5215 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.571574 sshd[5212]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:13.572000 audit[5212]: USER_END pid=5212 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.572000 audit[5212]: CRED_DISP pid=5212 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.574606 systemd-logind[1338]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:57:13.575377 systemd[1]: sshd@14-139.178.70.99:22-139.178.68.195:34240.service: Deactivated successfully. Mar 17 18:57:13.575835 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:57:13.576629 systemd-logind[1338]: Removed session 17. Mar 17 18:57:13.580088 kernel: audit: type=1106 audit(1742237833.572:477): pid=5212 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.580147 kernel: audit: type=1104 audit(1742237833.572:478): pid=5212 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:13.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.99:22-139.178.68.195:34240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:18.573712 systemd[1]: Started sshd@15-139.178.70.99:22-139.178.68.195:51388.service. 
Mar 17 18:57:18.578635 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:57:18.578692 kernel: audit: type=1130 audit(1742237838.573:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.99:22-139.178.68.195:51388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:18.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.99:22-139.178.68.195:51388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:18.653000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.654591 sshd[5249]: Accepted publickey for core from 139.178.68.195 port 51388 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:18.659710 kernel: audit: type=1101 audit(1742237838.653:481): pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.659000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.661216 sshd[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:18.665714 kernel: audit: type=1103 audit(1742237838.659:482): pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.666148 kernel: audit: type=1006 audit(1742237838.660:483): pid=5249 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Mar 17 18:57:18.660000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd778fdd80 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:18.660000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:18.675161 kernel: audit: type=1300 audit(1742237838.660:483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd778fdd80 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:18.675709 kernel: audit: type=1327 audit(1742237838.660:483): proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:18.674905 systemd[1]: Started session-18.scope. Mar 17 18:57:18.675413 systemd-logind[1338]: New session 18 of user core. 
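Note: the type=1327 PROCTITLE records carry the audited process's /proc/<pid>/cmdline hex-encoded, with NUL bytes separating arguments. A minimal decode of the value that repeats in the sshd records above:

    # Decode an audit PROCTITLE value (hex-encoded cmdline, NUL-separated args).
    hexval = "737368643A20636F7265205B707269765D"
    print(bytes.fromhex(hexval).replace(b"\x00", b" ").decode())
    # -> sshd: core [priv]   (the privileged sshd child handling the "core" login)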
Mar 17 18:57:18.679000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.683000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.687047 kernel: audit: type=1105 audit(1742237838.679:484): pid=5249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.687077 kernel: audit: type=1103 audit(1742237838.683:485): pid=5252 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.881968 systemd[1]: Started sshd@16-139.178.70.99:22-139.178.68.195:51390.service. Mar 17 18:57:18.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.99:22-139.178.68.195:51390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:18.885687 kernel: audit: type=1130 audit(1742237838.880:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.99:22-139.178.68.195:51390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:18.887322 sshd[5249]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:18.889000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.892254 systemd-logind[1338]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:57:18.893161 systemd[1]: sshd@15-139.178.70.99:22-139.178.68.195:51388.service: Deactivated successfully. Mar 17 18:57:18.893640 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:57:18.889000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.99:22-139.178.68.195:51388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:18.894753 systemd-logind[1338]: Removed session 18. 
Mar 17 18:57:18.895695 kernel: audit: type=1106 audit(1742237838.889:487): pid=5249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.930000 audit[5259]: USER_ACCT pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.932643 sshd[5259]: Accepted publickey for core from 139.178.68.195 port 51390 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:18.931000 audit[5259]: CRED_ACQ pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.931000 audit[5259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6b9d4aa0 a2=3 a3=0 items=0 ppid=1 pid=5259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:18.931000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:18.933625 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:18.936624 systemd[1]: Started session-19.scope. Mar 17 18:57:18.937375 systemd-logind[1338]: New session 19 of user core. Mar 17 18:57:18.938000 audit[5259]: USER_START pid=5259 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:18.939000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:19.367111 systemd[1]: Started sshd@17-139.178.70.99:22-139.178.68.195:51404.service. Mar 17 18:57:19.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.99:22-139.178.68.195:51404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:57:19.373053 sshd[5259]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:19.376000 audit[5259]: USER_END pid=5259 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:19.378000 audit[5259]: CRED_DISP pid=5259 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:19.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.99:22-139.178.68.195:51390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:19.382182 systemd[1]: sshd@16-139.178.70.99:22-139.178.68.195:51390.service: Deactivated successfully. Mar 17 18:57:19.382708 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:57:19.383002 systemd-logind[1338]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:57:19.383767 systemd-logind[1338]: Removed session 19. Mar 17 18:57:19.422000 audit[5270]: USER_ACCT pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:19.425296 sshd[5270]: Accepted publickey for core from 139.178.68.195 port 51404 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:19.425000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:19.425000 audit[5270]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc716c02a0 a2=3 a3=0 items=0 ppid=1 pid=5270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:19.425000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:19.428117 sshd[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:19.432016 systemd[1]: Started session-20.scope. Mar 17 18:57:19.432354 systemd-logind[1338]: New session 20 of user core. 
Mar 17 18:57:19.434000 audit[5270]: USER_START pid=5270 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:19.435000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:21.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.99:22-139.178.68.195:51406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:21.273000 audit[5270]: USER_END pid=5270 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:21.276000 audit[5270]: CRED_DISP pid=5270 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:21.252042 systemd[1]: Started sshd@18-139.178.70.99:22-139.178.68.195:51406.service. Mar 17 18:57:21.257953 sshd[5270]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:21.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.99:22-139.178.68.195:51404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:21.292311 systemd[1]: sshd@17-139.178.70.99:22-139.178.68.195:51404.service: Deactivated successfully. Mar 17 18:57:21.292863 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:57:21.293809 systemd-logind[1338]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:57:21.294393 systemd-logind[1338]: Removed session 20. 
Mar 17 18:57:21.345000 audit[5289]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=5289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:57:21.345000 audit[5289]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffdcb329b10 a2=0 a3=7ffdcb329afc items=0 ppid=2501 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:21.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:57:21.352000 audit[5289]: NETFILTER_CFG table=nat:116 family=2 entries=22 op=nft_register_rule pid=5289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:57:21.352000 audit[5289]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdcb329b10 a2=0 a3=0 items=0 ppid=2501 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:21.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:57:21.361000 audit[5291]: NETFILTER_CFG table=filter:117 family=2 entries=32 op=nft_register_rule pid=5291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:57:21.361000 audit[5291]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffea69bd6a0 a2=0 a3=7ffea69bd68c items=0 ppid=2501 pid=5291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:21.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:57:21.362000 audit[5284]: USER_ACCT pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:21.364580 sshd[5284]: Accepted publickey for core from 139.178.68.195 port 51406 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:21.364000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:21.364000 audit[5284]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff59fb0d90 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:21.364000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:21.366000 audit[5291]: NETFILTER_CFG table=nat:118 family=2 entries=22 op=nft_register_rule pid=5291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:57:21.366000 audit[5291]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffea69bd6a0 a2=0 a3=0 items=0 ppid=2501 pid=5291 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:21.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:57:21.366288 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:21.373363 systemd[1]: Started session-21.scope. Mar 17 18:57:21.373708 systemd-logind[1338]: New session 21 of user core. Mar 17 18:57:21.387000 audit[5284]: USER_START pid=5284 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:21.387000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.019435 systemd[1]: Started sshd@19-139.178.70.99:22-139.178.68.195:51418.service. Mar 17 18:57:22.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.99:22-139.178.68.195:51418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:22.021000 audit[5284]: USER_END pid=5284 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.021000 audit[5284]: CRED_DISP pid=5284 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.022793 sshd[5284]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:22.028319 systemd[1]: sshd@18-139.178.70.99:22-139.178.68.195:51406.service: Deactivated successfully. Mar 17 18:57:22.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.99:22-139.178.68.195:51406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:22.029121 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:57:22.029160 systemd-logind[1338]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:57:22.029829 systemd-logind[1338]: Removed session 21. 
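Note: the type=1325 NETFILTER_CFG records, together with their SYSCALL/PROCTITLE companions, record a rule reload performed via xtables-nft-multi (ppid=2501 is the invoking parent; which unit that is cannot be read off this excerpt). Decoding the PROCTITLE value from these records recovers the exact invocation; here the NUL separators delimit the individual arguments:

    # Decode the iptables-restore PROCTITLE from the NETFILTER_CFG events above.
    hexval = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(bytes.fromhex(hexval).split(b"\x00"))
    # -> [b'iptables-restore', b'-w', b'5', b'-W', b'100000', b'--noflush', b'--counters']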
Mar 17 18:57:22.109000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.111710 sshd[5299]: Accepted publickey for core from 139.178.68.195 port 51418 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:22.110000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.110000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb6a39f60 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:22.110000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:22.113485 sshd[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:22.118697 systemd-logind[1338]: New session 22 of user core. Mar 17 18:57:22.118737 systemd[1]: Started session-22.scope. Mar 17 18:57:22.122000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.123000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.357933 sshd[5299]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:22.359000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.359000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:22.362399 systemd[1]: sshd@19-139.178.70.99:22-139.178.68.195:51418.service: Deactivated successfully. Mar 17 18:57:22.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.99:22-139.178.68.195:51418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:22.363022 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:57:22.363065 systemd-logind[1338]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:57:22.363743 systemd-logind[1338]: Removed session 22. 
Mar 17 18:57:26.011000 audit[5314]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:57:26.023407 kernel: kauditd_printk_skb: 57 callbacks suppressed Mar 17 18:57:26.028780 kernel: audit: type=1325 audit(1742237846.011:529): table=filter:119 family=2 entries=20 op=nft_register_rule pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:57:26.029963 kernel: audit: type=1300 audit(1742237846.011:529): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc658fc300 a2=0 a3=7ffc658fc2ec items=0 ppid=2501 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:26.030441 kernel: audit: type=1327 audit(1742237846.011:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:57:26.030490 kernel: audit: type=1325 audit(1742237846.022:530): table=nat:120 family=2 entries=106 op=nft_register_chain pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:57:26.031175 kernel: audit: type=1300 audit(1742237846.022:530): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc658fc300 a2=0 a3=7ffc658fc2ec items=0 ppid=2501 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:26.031196 kernel: audit: type=1327 audit(1742237846.022:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:57:26.011000 audit[5314]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc658fc300 a2=0 a3=7ffc658fc2ec items=0 ppid=2501 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:26.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:57:26.022000 audit[5314]: NETFILTER_CFG table=nat:120 family=2 entries=106 op=nft_register_chain pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:57:26.022000 audit[5314]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc658fc300 a2=0 a3=7ffc658fc2ec items=0 ppid=2501 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:26.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:57:27.370447 systemd[1]: Started sshd@20-139.178.70.99:22-139.178.68.195:60962.service. Mar 17 18:57:27.381316 kernel: audit: type=1130 audit(1742237847.368:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.99:22-139.178.68.195:60962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:57:27.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.99:22-139.178.68.195:60962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:27.503000 audit[5316]: USER_ACCT pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:27.507058 sshd[5316]: Accepted publickey for core from 139.178.68.195 port 60962 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:27.510181 kernel: audit: type=1101 audit(1742237847.503:532): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:27.509000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:27.514065 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:27.514679 kernel: audit: type=1103 audit(1742237847.509:533): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:27.514714 kernel: audit: type=1006 audit(1742237847.509:534): pid=5316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Mar 17 18:57:27.509000 audit[5316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecaa78d90 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:27.509000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:27.531037 systemd-logind[1338]: New session 23 of user core. Mar 17 18:57:27.533000 audit[5316]: USER_START pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:27.533000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:27.531345 systemd[1]: Started session-23.scope. 
Mar 17 18:57:27.973869 sshd[5316]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:27.973000 audit[5316]: USER_END pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:27.973000 audit[5316]: CRED_DISP pid=5316 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:27.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.99:22-139.178.68.195:60962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:27.976156 systemd[1]: sshd@20-139.178.70.99:22-139.178.68.195:60962.service: Deactivated successfully. Mar 17 18:57:27.980297 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:57:27.980360 systemd-logind[1338]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:57:27.981711 systemd-logind[1338]: Removed session 23. Mar 17 18:57:32.986175 systemd[1]: Started sshd@21-139.178.70.99:22-139.178.68.195:60976.service. Mar 17 18:57:32.988120 kernel: kauditd_printk_skb: 7 callbacks suppressed Mar 17 18:57:32.996121 kernel: audit: type=1130 audit(1742237852.984:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.99:22-139.178.68.195:60976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:32.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.99:22-139.178.68.195:60976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:57:33.117000 audit[5332]: USER_ACCT pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.122108 sshd[5332]: Accepted publickey for core from 139.178.68.195 port 60976 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:33.123035 kernel: audit: type=1101 audit(1742237853.117:541): pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.121000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.127001 sshd[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:33.128761 kernel: audit: type=1103 audit(1742237853.121:542): pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.128808 kernel: audit: type=1006 audit(1742237853.121:543): pid=5332 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Mar 17 18:57:33.129693 kernel: audit: type=1300 audit(1742237853.121:543): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff31b45280 a2=3 a3=0 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:33.121000 audit[5332]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff31b45280 a2=3 a3=0 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:33.121000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:33.134154 kernel: audit: type=1327 audit(1742237853.121:543): proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:33.137154 systemd-logind[1338]: New session 24 of user core. Mar 17 18:57:33.137422 systemd[1]: Started session-24.scope. 
Mar 17 18:57:33.142000 audit[5332]: USER_START pid=5332 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.150026 kernel: audit: type=1105 audit(1742237853.142:544): pid=5332 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.150061 kernel: audit: type=1103 audit(1742237853.146:545): pid=5335 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.146000 audit[5335]: CRED_ACQ pid=5335 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.584635 sshd[5332]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:33.583000 audit[5332]: USER_END pid=5332 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.589265 systemd[1]: sshd@21-139.178.70.99:22-139.178.68.195:60976.service: Deactivated successfully. Mar 17 18:57:33.590103 kernel: audit: type=1106 audit(1742237853.583:546): pid=5332 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.590560 kernel: audit: type=1104 audit(1742237853.583:547): pid=5332 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.583000 audit[5332]: CRED_DISP pid=5332 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:33.590034 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:57:33.590312 systemd-logind[1338]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:57:33.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.99:22-139.178.68.195:60976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:33.594256 systemd-logind[1338]: Removed session 24. Mar 17 18:57:38.586372 systemd[1]: Started sshd@22-139.178.70.99:22-139.178.68.195:40510.service. 
Mar 17 18:57:38.587629 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:57:38.588410 kernel: audit: type=1130 audit(1742237858.584:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.99:22-139.178.68.195:40510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:38.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.99:22-139.178.68.195:40510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:38.679000 audit[5364]: USER_ACCT pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:38.683099 sshd[5364]: Accepted publickey for core from 139.178.68.195 port 40510 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA Mar 17 18:57:38.684684 kernel: audit: type=1101 audit(1742237858.679:550): pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:38.683000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:38.688602 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:57:38.690141 kernel: audit: type=1103 audit(1742237858.683:551): pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:38.690184 kernel: audit: type=1006 audit(1742237858.683:552): pid=5364 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Mar 17 18:57:38.690236 kernel: audit: type=1300 audit(1742237858.683:552): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0a7d7040 a2=3 a3=0 items=0 ppid=1 pid=5364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:38.683000 audit[5364]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0a7d7040 a2=3 a3=0 items=0 ppid=1 pid=5364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:57:38.683000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:38.694527 kernel: audit: type=1327 audit(1742237858.683:552): proctitle=737368643A20636F7265205B707269765D Mar 17 18:57:38.697633 systemd[1]: Started session-25.scope. Mar 17 18:57:38.697834 systemd-logind[1338]: New session 25 of user core. 
Mar 17 18:57:38.699000 audit[5364]: USER_START pid=5364 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:38.705065 kernel: audit: type=1105 audit(1742237858.699:553): pid=5364 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:38.705103 kernel: audit: type=1103 audit(1742237858.703:554): pid=5367 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:38.703000 audit[5367]: CRED_ACQ pid=5367 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:39.042675 sshd[5364]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:39.041000 audit[5364]: USER_END pid=5364 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:39.044645 systemd[1]: sshd@22-139.178.70.99:22-139.178.68.195:40510.service: Deactivated successfully. Mar 17 18:57:39.045153 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:57:39.047687 kernel: audit: type=1106 audit(1742237859.041:555): pid=5364 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:39.042000 audit[5364]: CRED_DISP pid=5364 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:39.048482 systemd-logind[1338]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:57:39.051681 kernel: audit: type=1104 audit(1742237859.042:556): pid=5364 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Mar 17 18:57:39.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.99:22-139.178.68.195:40510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:57:39.052160 systemd-logind[1338]: Removed session 25.