Aug 13 00:51:45.674831 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 00:51:45.674846 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:51:45.674852 kernel: Disabled fast string operations
Aug 13 00:51:45.674856 kernel: BIOS-provided physical RAM map:
Aug 13 00:51:45.674861 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Aug 13 00:51:45.674865 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Aug 13 00:51:45.674871 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Aug 13 00:51:45.674875 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Aug 13 00:51:45.674879 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Aug 13 00:51:45.674883 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Aug 13 00:51:45.674887 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Aug 13 00:51:45.674896 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Aug 13 00:51:45.674900 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Aug 13 00:51:45.674904 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Aug 13 00:51:45.674911 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Aug 13 00:51:45.674915 kernel: NX (Execute Disable) protection: active
Aug 13 00:51:45.674920 kernel: SMBIOS 2.7 present.
Aug 13 00:51:45.674925 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Aug 13 00:51:45.674929 kernel: vmware: hypercall mode: 0x00
Aug 13 00:51:45.674934 kernel: Hypervisor detected: VMware
Aug 13 00:51:45.674939 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Aug 13 00:51:45.674944 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Aug 13 00:51:45.674948 kernel: vmware: using clock offset of 3018839201 ns
Aug 13 00:51:45.674952 kernel: tsc: Detected 3408.000 MHz processor
Aug 13 00:51:45.674957 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:51:45.674963 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:51:45.674967 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Aug 13 00:51:45.674972 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:51:45.674977 kernel: total RAM covered: 3072M
Aug 13 00:51:45.674982 kernel: Found optimal setting for mtrr clean up
Aug 13 00:51:45.674987 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Aug 13 00:51:45.674992 kernel: Using GB pages for direct mapping
Aug 13 00:51:45.674997 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:51:45.675001 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Aug 13 00:51:45.675006 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Aug 13 00:51:45.675011 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Aug 13 00:51:45.675015 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Aug 13 00:51:45.675020 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Aug 13 00:51:45.675024 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Aug 13 00:51:45.675030 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Aug 13 00:51:45.675037 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Aug 13 00:51:45.675042 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Aug 13 00:51:45.675047 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Aug 13 00:51:45.675052 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Aug 13 00:51:45.675058 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Aug 13 00:51:45.675063 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Aug 13 00:51:45.675068 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Aug 13 00:51:45.675074 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Aug 13 00:51:45.675079 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Aug 13 00:51:45.675083 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Aug 13 00:51:45.675088 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Aug 13 00:51:45.675093 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Aug 13 00:51:45.675099 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Aug 13 00:51:45.675104 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Aug 13 00:51:45.675109 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Aug 13 00:51:45.675114 kernel: system APIC only can use physical flat
Aug 13 00:51:45.675119 kernel: Setting APIC routing to physical flat.
Aug 13 00:51:45.675124 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 00:51:45.675129 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Aug 13 00:51:45.675134 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Aug 13 00:51:45.675139 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Aug 13 00:51:45.675144 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Aug 13 00:51:45.675160 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Aug 13 00:51:45.675166 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Aug 13 00:51:45.675170 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Aug 13 00:51:45.675176 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Aug 13 00:51:45.675180 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Aug 13 00:51:45.675185 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Aug 13 00:51:45.675190 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Aug 13 00:51:45.675195 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Aug 13 00:51:45.675200 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Aug 13 00:51:45.675205 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Aug 13 00:51:45.675211 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Aug 13 00:51:45.675216 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Aug 13 00:51:45.675221 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Aug 13 00:51:45.675226 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Aug 13 00:51:45.675231 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Aug 13 00:51:45.675236 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Aug 13 00:51:45.675241 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Aug 13 00:51:45.675245 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Aug 13 00:51:45.675250 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Aug 13 00:51:45.675255 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Aug 13 00:51:45.675261 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Aug 13 00:51:45.675266 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Aug 13 00:51:45.675271 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Aug 13 00:51:45.675276 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Aug 13 00:51:45.675281 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Aug 13 00:51:45.675286 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Aug 13 00:51:45.675291 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Aug 13 00:51:45.675296 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Aug 13 00:51:45.675301 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Aug 13 00:51:45.675306 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Aug 13 00:51:45.675311 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Aug 13 00:51:45.675316 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Aug 13 00:51:45.675321 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Aug 13 00:51:45.675326 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Aug 13 00:51:45.675331 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Aug 13 00:51:45.675336 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Aug 13 00:51:45.675341 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Aug 13 00:51:45.675346 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Aug 13 00:51:45.675351 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Aug 13 00:51:45.675356 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Aug 13 00:51:45.675362 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Aug 13 00:51:45.675367 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Aug 13 00:51:45.675372 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Aug 13 00:51:45.675377 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Aug 13 00:51:45.675382 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Aug 13 00:51:45.675387 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Aug 13 00:51:45.675391 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Aug 13 00:51:45.675396 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Aug 13 00:51:45.675401 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Aug 13 00:51:45.675406 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Aug 13 00:51:45.675412 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Aug 13 00:51:45.675417 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Aug 13 00:51:45.675422 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Aug 13 00:51:45.675427 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Aug 13 00:51:45.675432 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Aug 13 00:51:45.675437 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Aug 13 00:51:45.675447 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Aug 13 00:51:45.675452 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Aug 13 00:51:45.675457 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Aug 13 00:51:45.675463 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Aug 13 00:51:45.675468 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Aug 13 00:51:45.675474 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Aug 13 00:51:45.675479 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Aug 13 00:51:45.675485 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Aug 13 00:51:45.675490 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Aug 13 00:51:45.675495 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Aug 13 00:51:45.675500 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Aug 13 00:51:45.675506 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Aug 13 00:51:45.675512 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Aug 13 00:51:45.675517 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Aug 13 00:51:45.675522 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Aug 13 00:51:45.675528 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Aug 13 00:51:45.675533 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Aug 13 00:51:45.675538 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Aug 13 00:51:45.675543 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Aug 13 00:51:45.675549 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Aug 13 00:51:45.675554 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Aug 13 00:51:45.675560 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Aug 13 00:51:45.675565 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Aug 13 00:51:45.675571 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Aug 13 00:51:45.675576 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Aug 13 00:51:45.675581 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Aug 13 00:51:45.675586 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Aug 13 00:51:45.675591 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Aug 13 00:51:45.675597 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Aug 13 00:51:45.675602 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Aug 13 00:51:45.675607 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Aug 13 00:51:45.675613 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Aug 13 00:51:45.675618 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Aug 13 00:51:45.675624 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Aug 13 00:51:45.675629 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Aug 13 00:51:45.675634 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Aug 13 00:51:45.675639 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Aug 13 00:51:45.675645 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Aug 13 00:51:45.675650 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Aug 13 00:51:45.675655 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Aug 13 00:51:45.675660 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Aug 13 00:51:45.675667 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Aug 13 00:51:45.675672 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Aug 13 00:51:45.675677 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Aug 13 00:51:45.675682 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Aug 13 00:51:45.675688 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Aug 13 00:51:45.675693 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Aug 13 00:51:45.675698 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Aug 13 00:51:45.675703 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Aug 13 00:51:45.675708 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Aug 13 00:51:45.675714 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Aug 13 00:51:45.675720 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Aug 13 00:51:45.675725 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Aug 13 00:51:45.675730 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Aug 13 00:51:45.675736 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Aug 13 00:51:45.675741 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Aug 13 00:51:45.675746 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Aug 13 00:51:45.675751 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Aug 13 00:51:45.675757 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Aug 13 00:51:45.675762 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Aug 13 00:51:45.675767 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Aug 13 00:51:45.675774 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Aug 13 00:51:45.675779 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Aug 13 00:51:45.675784 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Aug 13 00:51:45.675789 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Aug 13 00:51:45.675795 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Aug 13 00:51:45.675800 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Aug 13 00:51:45.675805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 00:51:45.675811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 00:51:45.675816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Aug 13 00:51:45.675822 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Aug 13 00:51:45.675828 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Aug 13 00:51:45.675833 kernel: Zone ranges:
Aug 13 00:51:45.675839 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:51:45.675844 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Aug 13 00:51:45.675849 kernel: Normal empty
Aug 13 00:51:45.675855 kernel: Movable zone start for each node
Aug 13 00:51:45.675860 kernel: Early memory node ranges
Aug 13 00:51:45.675865 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Aug 13 00:51:45.675871 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Aug 13 00:51:45.675877 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Aug 13 00:51:45.675883 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Aug 13 00:51:45.675888 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:51:45.675893 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Aug 13 00:51:45.675899 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Aug 13 00:51:45.675904 kernel: ACPI: PM-Timer IO Port: 0x1008
Aug 13 00:51:45.675910 kernel: system APIC only can use physical flat
Aug 13 00:51:45.675915 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Aug 13 00:51:45.675920 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Aug 13 00:51:45.675926 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Aug 13 00:51:45.675932 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Aug 13 00:51:45.675937 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Aug 13 00:51:45.675942 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Aug 13 00:51:45.675948 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Aug 13 00:51:45.675953 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Aug 13 00:51:45.675958 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Aug 13 00:51:45.675963 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Aug 13 00:51:45.675969 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Aug 13 00:51:45.675974 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Aug 13 00:51:45.675981 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Aug 13 00:51:45.675986 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Aug 13 00:51:45.675991 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Aug 13 00:51:45.675996 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Aug 13 00:51:45.676002 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Aug 13 00:51:45.676007 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Aug 13 00:51:45.676012 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Aug 13 00:51:45.676018 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Aug 13 00:51:45.676023 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Aug 13 00:51:45.676029 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Aug 13 00:51:45.676034 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Aug 13 00:51:45.676040 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Aug 13 00:51:45.676045 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Aug 13 00:51:45.676050 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Aug 13 00:51:45.676056 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Aug 13 00:51:45.676061 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Aug 13 00:51:45.676066 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Aug 13 00:51:45.676071 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Aug 13 00:51:45.676076 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Aug 13 00:51:45.676083 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Aug 13 00:51:45.676088 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Aug 13 00:51:45.676093 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Aug 13 00:51:45.676099 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Aug 13 00:51:45.676105 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Aug 13 00:51:45.676110 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Aug 13 00:51:45.676115 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Aug 13 00:51:45.676120 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Aug 13 00:51:45.676126 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Aug 13 00:51:45.676132 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Aug 13 00:51:45.676137 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Aug 13 00:51:45.676143 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Aug 13 00:51:45.676148 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Aug 13 00:51:45.676164 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Aug 13 00:51:45.676170 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Aug 13 00:51:45.676175 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Aug 13 00:51:45.676180 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Aug 13 00:51:45.676186 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Aug 13 00:51:45.676193 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Aug 13 00:51:45.676198 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Aug 13 00:51:45.676203 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Aug 13 00:51:45.676209 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Aug 13 00:51:45.676214 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Aug 13 00:51:45.676219 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Aug 13 00:51:45.676224 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Aug 13 00:51:45.676230 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Aug 13 00:51:45.676235 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Aug 13 00:51:45.676240 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Aug 13 00:51:45.676247 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Aug 13 00:51:45.676252 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Aug 13 00:51:45.676257 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Aug 13 00:51:45.676263 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Aug 13 00:51:45.676268 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Aug 13 00:51:45.676274 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Aug 13 00:51:45.676279 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Aug 13 00:51:45.676284 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Aug 13 00:51:45.676289 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Aug 13 00:51:45.676296 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Aug 13 00:51:45.676301 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Aug 13 00:51:45.676306 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Aug 13 00:51:45.676312 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Aug 13 00:51:45.676317 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Aug 13 00:51:45.676322 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Aug 13 00:51:45.676328 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Aug 13 00:51:45.676333 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Aug 13 00:51:45.676338 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Aug 13 00:51:45.676344 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Aug 13 00:51:45.676350 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Aug 13 00:51:45.676355 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Aug 13 00:51:45.676360 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Aug 13 00:51:45.676366 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Aug 13 00:51:45.676371 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Aug 13 00:51:45.676377 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Aug 13 00:51:45.676382 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Aug 13 00:51:45.676387 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Aug 13 00:51:45.676392 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Aug 13 00:51:45.676399 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Aug 13 00:51:45.676404 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Aug 13 00:51:45.676409 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Aug 13 00:51:45.676415 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Aug 13 00:51:45.676420 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Aug 13 00:51:45.676425 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Aug 13 00:51:45.676430 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Aug 13 00:51:45.676436 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Aug 13 00:51:45.676441 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Aug 13 00:51:45.676446 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Aug 13 00:51:45.676453 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Aug 13 00:51:45.676458 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Aug 13 00:51:45.676463 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Aug 13 00:51:45.676468 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Aug 13 00:51:45.676474 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Aug 13 00:51:45.676479 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Aug 13 00:51:45.676484 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Aug 13 00:51:45.676489 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Aug 13 00:51:45.676495 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Aug 13 00:51:45.676501 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Aug 13 00:51:45.676506 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Aug 13 00:51:45.676512 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Aug 13 00:51:45.676517 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Aug 13 00:51:45.676523 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Aug 13 00:51:45.676528 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Aug 13 00:51:45.676533 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Aug 13 00:51:45.676539 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Aug 13 00:51:45.676544 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Aug 13 00:51:45.676549 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Aug 13 00:51:45.676556 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Aug 13 00:51:45.676561 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Aug 13 00:51:45.676566 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Aug 13 00:51:45.676571 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Aug 13 00:51:45.676577 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Aug 13 00:51:45.676582 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Aug 13 00:51:45.676588 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Aug 13 00:51:45.676593 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Aug 13 00:51:45.676598 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Aug 13 00:51:45.676605 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Aug 13 00:51:45.676610 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Aug 13 00:51:45.676615 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Aug 13 00:51:45.676621 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:51:45.676626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Aug 13 00:51:45.676631 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:51:45.676637 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Aug 13 00:51:45.676642 kernel: TSC deadline timer available
Aug 13 00:51:45.676647 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Aug 13 00:51:45.676654 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Aug 13 00:51:45.676659 kernel: Booting paravirtualized kernel on VMware hypervisor
Aug 13 00:51:45.676665 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:51:45.676670 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Aug 13 00:51:45.676676 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Aug 13 00:51:45.676681 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Aug 13 00:51:45.676686 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Aug 13 00:51:45.676692 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Aug 13 00:51:45.676697 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Aug 13 00:51:45.676704 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Aug 13 00:51:45.676712 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Aug 13 00:51:45.676719 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Aug 13 00:51:45.676727 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Aug 13 00:51:45.676744 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Aug 13 00:51:45.676752 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Aug 13 00:51:45.676758 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Aug 13 00:51:45.676764 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Aug 13 00:51:45.676769 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Aug 13 00:51:45.676776 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Aug 13 00:51:45.676781 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Aug 13 00:51:45.676787 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Aug 13 00:51:45.676792 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Aug 13 00:51:45.676798 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Aug 13 00:51:45.676804 kernel: Policy zone: DMA32
Aug 13 00:51:45.676811 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:51:45.676817 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:51:45.676824 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Aug 13 00:51:45.676830 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Aug 13 00:51:45.676836 kernel: printk: log_buf_len min size: 262144 bytes
Aug 13 00:51:45.676842 kernel: printk: log_buf_len: 1048576 bytes
Aug 13 00:51:45.676848 kernel: printk: early log buf free: 239728(91%)
Aug 13 00:51:45.676853 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:51:45.676859 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 00:51:45.676865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:51:45.676871 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 155976K reserved, 0K cma-reserved)
Aug 13 00:51:45.676878 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Aug 13 00:51:45.676884 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 00:51:45.676890 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 00:51:45.676897 kernel: rcu: Hierarchical RCU implementation.
Aug 13 00:51:45.676903 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:51:45.676908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Aug 13 00:51:45.676915 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:51:45.676921 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:51:45.676927 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:51:45.676933 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Aug 13 00:51:45.676938 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Aug 13 00:51:45.676944 kernel: random: crng init done
Aug 13 00:51:45.676950 kernel: Console: colour VGA+ 80x25
Aug 13 00:51:45.676955 kernel: printk: console [tty0] enabled
Aug 13 00:51:45.676961 kernel: printk: console [ttyS0] enabled
Aug 13 00:51:45.676968 kernel: ACPI: Core revision 20210730
Aug 13 00:51:45.676974 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Aug 13 00:51:45.676980 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:51:45.676986 kernel: x2apic enabled
Aug 13 00:51:45.676992 kernel: Switched APIC routing to physical x2apic.
Aug 13 00:51:45.676997 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:51:45.677003 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Aug 13 00:51:45.677009 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Aug 13 00:51:45.677015 kernel: Disabled fast string operations
Aug 13 00:51:45.677023 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 00:51:45.677032 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 00:51:45.677041 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:51:45.677048 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Aug 13 00:51:45.677054 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 00:51:45.677059 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 00:51:45.677065 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Aug 13 00:51:45.677071 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Aug 13 00:51:45.677077 kernel: RETBleed: Mitigation: Enhanced IBRS Aug 13 00:51:45.677084 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:51:45.677090 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 00:51:45.677096 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:51:45.677101 kernel: SRBDS: Unknown: Dependent on hypervisor status Aug 13 00:51:45.677107 kernel: GDS: Unknown: Dependent on hypervisor status Aug 13 00:51:45.677113 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:51:45.677118 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:51:45.677124 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:51:45.677131 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:51:45.677137 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:51:45.677142 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Aug 13 00:51:45.677148 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:51:45.677247 kernel: pid_max: default: 131072 minimum: 1024 Aug 13 00:51:45.677254 kernel: LSM: Security Framework initializing Aug 13 00:51:45.677260 kernel: SELinux: Initializing. 
Aug 13 00:51:45.677266 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:51:45.677272 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:51:45.677280 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Aug 13 00:51:45.677286 kernel: Performance Events: Skylake events, core PMU driver. Aug 13 00:51:45.677292 kernel: core: CPUID marked event: 'cpu cycles' unavailable Aug 13 00:51:45.677297 kernel: core: CPUID marked event: 'instructions' unavailable Aug 13 00:51:45.677303 kernel: core: CPUID marked event: 'bus cycles' unavailable Aug 13 00:51:45.677309 kernel: core: CPUID marked event: 'cache references' unavailable Aug 13 00:51:45.677314 kernel: core: CPUID marked event: 'cache misses' unavailable Aug 13 00:51:45.677320 kernel: core: CPUID marked event: 'branch instructions' unavailable Aug 13 00:51:45.677327 kernel: core: CPUID marked event: 'branch misses' unavailable Aug 13 00:51:45.677333 kernel: ... version: 1 Aug 13 00:51:45.677339 kernel: ... bit width: 48 Aug 13 00:51:45.677345 kernel: ... generic registers: 4 Aug 13 00:51:45.677351 kernel: ... value mask: 0000ffffffffffff Aug 13 00:51:45.677356 kernel: ... max period: 000000007fffffff Aug 13 00:51:45.677362 kernel: ... fixed-purpose events: 0 Aug 13 00:51:45.677368 kernel: ... event mask: 000000000000000f Aug 13 00:51:45.677373 kernel: signal: max sigframe size: 1776 Aug 13 00:51:45.677379 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:51:45.677386 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 00:51:45.677392 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:51:45.677397 kernel: x86: Booting SMP configuration: Aug 13 00:51:45.677403 kernel: .... 
node #0, CPUs: #1 Aug 13 00:51:45.677409 kernel: Disabled fast string operations Aug 13 00:51:45.677414 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Aug 13 00:51:45.677420 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 13 00:51:45.677426 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:51:45.677431 kernel: smpboot: Max logical packages: 128 Aug 13 00:51:45.677437 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Aug 13 00:51:45.677444 kernel: devtmpfs: initialized Aug 13 00:51:45.677450 kernel: x86/mm: Memory block size: 128MB Aug 13 00:51:45.677456 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Aug 13 00:51:45.677462 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:51:45.677468 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Aug 13 00:51:45.677474 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:51:45.677479 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:51:45.677485 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:51:45.677491 kernel: audit: type=2000 audit(1755046304.085:1): state=initialized audit_enabled=0 res=1 Aug 13 00:51:45.677498 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:51:45.677503 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:51:45.677509 kernel: cpuidle: using governor menu Aug 13 00:51:45.677515 kernel: Simple Boot Flag at 0x36 set to 0x80 Aug 13 00:51:45.677521 kernel: ACPI: bus type PCI registered Aug 13 00:51:45.677527 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:51:45.677533 kernel: dca service started, version 1.12.1 Aug 13 00:51:45.677538 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Aug 13 00:51:45.677544 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Aug 13 00:51:45.677551 kernel: PCI: Using configuration type 1 for base access Aug 13 00:51:45.677557 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 00:51:45.677562 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:51:45.677568 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:51:45.677574 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:51:45.677579 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:51:45.677585 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:51:45.677591 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 00:51:45.677596 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 00:51:45.677603 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 00:51:45.677609 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:51:45.677615 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Aug 13 00:51:45.677621 kernel: ACPI: Interpreter enabled Aug 13 00:51:45.677626 kernel: ACPI: PM: (supports S0 S1 S5) Aug 13 00:51:45.677632 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:51:45.677638 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:51:45.677643 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Aug 13 00:51:45.677649 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Aug 13 00:51:45.677733 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:51:45.677784 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Aug 13 00:51:45.677852 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Aug 13 00:51:45.677863 kernel: PCI host bridge to bus 0000:00 Aug 13 00:51:45.677918 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:51:45.677963 kernel: pci_bus 0000:00: root bus resource [mem 
0x000cc000-0x000dbfff window] Aug 13 00:51:45.678009 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 00:51:45.678051 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:51:45.678093 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Aug 13 00:51:45.678134 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Aug 13 00:51:45.687464 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Aug 13 00:51:45.687540 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Aug 13 00:51:45.687599 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Aug 13 00:51:45.687658 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Aug 13 00:51:45.687708 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Aug 13 00:51:45.687756 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 00:51:45.687804 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 00:51:45.687852 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 00:51:45.687906 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 00:51:45.687964 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Aug 13 00:51:45.688012 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Aug 13 00:51:45.688060 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Aug 13 00:51:45.688115 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Aug 13 00:51:45.688175 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Aug 13 00:51:45.688224 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Aug 13 00:51:45.688280 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Aug 13 00:51:45.688328 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Aug 13 00:51:45.688376 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Aug 13 00:51:45.688424 kernel: pci 
0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Aug 13 00:51:45.688470 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Aug 13 00:51:45.688518 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:51:45.688571 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Aug 13 00:51:45.688625 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.688674 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.688726 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.688780 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.688833 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.688881 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.688934 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.688986 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689037 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689086 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689140 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689203 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689257 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689307 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689359 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689407 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689459 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689507 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689562 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689610 kernel: pci 0000:00:16.1: PME# 
supported from D0 D3hot D3cold Aug 13 00:51:45.689664 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689711 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689763 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689811 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689863 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.689931 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.689988 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.690036 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.690086 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.690134 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.697920 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.697990 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698047 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698098 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698163 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698216 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698268 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698322 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698376 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698425 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698481 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698531 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698583 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698634 
kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698686 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698734 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698784 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698832 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698883 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.698932 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.698983 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.699032 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.699084 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.699133 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.701271 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.701333 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.701388 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.701438 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.701489 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.701539 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.701594 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.701643 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.701698 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Aug 13 00:51:45.701747 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.701799 kernel: pci_bus 0000:01: extended config space not accessible Aug 13 00:51:45.701850 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 00:51:45.701914 kernel: pci_bus 0000:02: extended config space not accessible Aug 
13 00:51:45.701924 kernel: acpiphp: Slot [32] registered Aug 13 00:51:45.701932 kernel: acpiphp: Slot [33] registered Aug 13 00:51:45.701938 kernel: acpiphp: Slot [34] registered Aug 13 00:51:45.701944 kernel: acpiphp: Slot [35] registered Aug 13 00:51:45.701950 kernel: acpiphp: Slot [36] registered Aug 13 00:51:45.701956 kernel: acpiphp: Slot [37] registered Aug 13 00:51:45.701961 kernel: acpiphp: Slot [38] registered Aug 13 00:51:45.701967 kernel: acpiphp: Slot [39] registered Aug 13 00:51:45.701973 kernel: acpiphp: Slot [40] registered Aug 13 00:51:45.701979 kernel: acpiphp: Slot [41] registered Aug 13 00:51:45.701985 kernel: acpiphp: Slot [42] registered Aug 13 00:51:45.701991 kernel: acpiphp: Slot [43] registered Aug 13 00:51:45.701997 kernel: acpiphp: Slot [44] registered Aug 13 00:51:45.702003 kernel: acpiphp: Slot [45] registered Aug 13 00:51:45.702009 kernel: acpiphp: Slot [46] registered Aug 13 00:51:45.702014 kernel: acpiphp: Slot [47] registered Aug 13 00:51:45.702020 kernel: acpiphp: Slot [48] registered Aug 13 00:51:45.702026 kernel: acpiphp: Slot [49] registered Aug 13 00:51:45.702032 kernel: acpiphp: Slot [50] registered Aug 13 00:51:45.702038 kernel: acpiphp: Slot [51] registered Aug 13 00:51:45.702044 kernel: acpiphp: Slot [52] registered Aug 13 00:51:45.702050 kernel: acpiphp: Slot [53] registered Aug 13 00:51:45.702056 kernel: acpiphp: Slot [54] registered Aug 13 00:51:45.702061 kernel: acpiphp: Slot [55] registered Aug 13 00:51:45.702067 kernel: acpiphp: Slot [56] registered Aug 13 00:51:45.702073 kernel: acpiphp: Slot [57] registered Aug 13 00:51:45.702079 kernel: acpiphp: Slot [58] registered Aug 13 00:51:45.702085 kernel: acpiphp: Slot [59] registered Aug 13 00:51:45.702091 kernel: acpiphp: Slot [60] registered Aug 13 00:51:45.702097 kernel: acpiphp: Slot [61] registered Aug 13 00:51:45.702103 kernel: acpiphp: Slot [62] registered Aug 13 00:51:45.702109 kernel: acpiphp: Slot [63] registered Aug 13 00:51:45.702169 kernel: pci 0000:00:11.0: 
PCI bridge to [bus 02] (subtractive decode) Aug 13 00:51:45.702221 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Aug 13 00:51:45.702269 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Aug 13 00:51:45.702316 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 00:51:45.702364 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Aug 13 00:51:45.702411 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Aug 13 00:51:45.702462 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Aug 13 00:51:45.702509 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Aug 13 00:51:45.702556 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Aug 13 00:51:45.702610 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Aug 13 00:51:45.702660 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Aug 13 00:51:45.702709 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Aug 13 00:51:45.702758 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Aug 13 00:51:45.702808 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Aug 13 00:51:45.702857 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Aug 13 00:51:45.702907 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Aug 13 00:51:45.702954 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Aug 13 00:51:45.703002 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Aug 13 00:51:45.703054 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Aug 13 00:51:45.703116 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Aug 13 00:51:45.705207 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Aug 13 00:51:45.705271 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 00:51:45.705326 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Aug 13 00:51:45.705377 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Aug 13 00:51:45.705426 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Aug 13 00:51:45.705475 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 00:51:45.705527 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Aug 13 00:51:45.705575 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Aug 13 00:51:45.705627 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 00:51:45.705676 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Aug 13 00:51:45.705723 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Aug 13 00:51:45.705771 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 00:51:45.705822 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Aug 13 00:51:45.705870 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Aug 13 00:51:45.705918 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 00:51:45.705968 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Aug 13 00:51:45.706016 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Aug 13 00:51:45.706062 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 00:51:45.706112 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Aug 13 00:51:45.706178 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Aug 13 00:51:45.706232 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 00:51:45.706289 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Aug 13 00:51:45.706340 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Aug 13 00:51:45.706389 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Aug 13 00:51:45.706438 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Aug 13 00:51:45.706487 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Aug 13 00:51:45.706536 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Aug 13 00:51:45.706587 kernel: pci 0000:0b:00.0: supports D1 D2 Aug 13 00:51:45.706636 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 00:51:45.706685 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Aug 13 00:51:45.706734 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Aug 13 00:51:45.706783 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Aug 13 00:51:45.706830 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Aug 13 00:51:45.706880 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Aug 13 00:51:45.706928 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Aug 13 00:51:45.706978 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Aug 13 00:51:45.707027 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 00:51:45.707078 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Aug 13 00:51:45.707125 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Aug 13 00:51:45.709223 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Aug 13 00:51:45.709288 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 00:51:45.709343 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Aug 13 00:51:45.709394 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Aug 13 00:51:45.709446 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 00:51:45.709496 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Aug 13 00:51:45.709544 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Aug 13 00:51:45.709592 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 00:51:45.709642 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Aug 13 00:51:45.709690 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Aug 13 00:51:45.709737 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 00:51:45.709785 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Aug 13 00:51:45.709835 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Aug 13 00:51:45.709882 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 00:51:45.709943 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Aug 13 00:51:45.709991 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 00:51:45.710039 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 00:51:45.710088 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 00:51:45.710136 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 00:51:45.710191 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 00:51:45.710242 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 00:51:45.710291 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 00:51:45.710339 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 00:51:45.710387 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 00:51:45.710434 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 00:51:45.710484 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 00:51:45.710532 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 00:51:45.710581 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 00:51:45.710629 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 00:51:45.710679 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 00:51:45.710727 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 00:51:45.710774 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 00:51:45.710824 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 00:51:45.710872 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 00:51:45.710920 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 00:51:45.710971 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Aug 13 00:51:45.711019 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 00:51:45.711067 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 00:51:45.711116 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 00:51:45.711170 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 00:51:45.711218 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 00:51:45.711268 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 00:51:45.711316 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 00:51:45.711365 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 00:51:45.711415 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 00:51:45.711463 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 00:51:45.711510 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 00:51:45.711556 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 00:51:45.711605 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 00:51:45.711653 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 00:51:45.711700 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 00:51:45.711750 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 00:51:45.711800 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 00:51:45.711847 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 00:51:45.711903 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 00:51:45.711970 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 00:51:45.712020 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 00:51:45.712068 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 00:51:45.712119 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Aug 13 
00:51:45.712177 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 00:51:45.712225 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 00:51:45.712273 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 00:51:45.712321 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 00:51:45.712367 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 00:51:45.712416 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 00:51:45.712463 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 00:51:45.712510 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 00:51:45.712563 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 00:51:45.712611 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 00:51:45.712658 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 00:51:45.712667 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Aug 13 00:51:45.712673 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Aug 13 00:51:45.712679 kernel: ACPI: PCI: Interrupt link LNKB disabled Aug 13 00:51:45.712685 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:51:45.712691 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Aug 13 00:51:45.712699 kernel: iommu: Default domain type: Translated Aug 13 00:51:45.712705 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:51:45.712753 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Aug 13 00:51:45.712800 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:51:45.712848 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Aug 13 00:51:45.712856 kernel: vgaarb: loaded Aug 13 00:51:45.712862 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 00:51:45.712868 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 00:51:45.712874 kernel: PTP clock support registered Aug 13 00:51:45.712882 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:51:45.712888 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:51:45.712894 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Aug 13 00:51:45.712900 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Aug 13 00:51:45.712905 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Aug 13 00:51:45.712911 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Aug 13 00:51:45.712917 kernel: clocksource: Switched to clocksource tsc-early Aug 13 00:51:45.712923 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:51:45.712929 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:51:45.712936 kernel: pnp: PnP ACPI init Aug 13 00:51:45.712989 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Aug 13 00:51:45.713056 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Aug 13 00:51:45.713103 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Aug 13 00:51:45.713164 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Aug 13 00:51:45.713220 kernel: pnp 00:06: [dma 2] Aug 13 00:51:45.713271 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Aug 13 00:51:45.713318 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Aug 13 00:51:45.713362 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Aug 13 00:51:45.713370 kernel: pnp: PnP ACPI: found 8 devices Aug 13 00:51:45.713377 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:51:45.713383 kernel: NET: Registered PF_INET protocol family Aug 13 00:51:45.713389 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:51:45.713395 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
Aug 13 00:51:45.713401 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:51:45.713408 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:51:45.713415 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 00:51:45.713420 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 00:51:45.713426 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:51:45.713432 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:51:45.713438 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:51:45.713444 kernel: NET: Registered PF_XDP protocol family Aug 13 00:51:45.713497 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Aug 13 00:51:45.713551 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 13 00:51:45.713602 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 13 00:51:45.713652 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 13 00:51:45.713701 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 13 00:51:45.713750 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Aug 13 00:51:45.713800 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Aug 13 00:51:45.713852 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Aug 13 00:51:45.713901 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Aug 13 00:51:45.713950 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Aug 13 00:51:45.713999 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 Aug 13 00:51:45.714048 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Aug 13 00:51:45.714097 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Aug 13 00:51:45.714147 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Aug 13 00:51:45.714213 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Aug 13 00:51:45.714263 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Aug 13 00:51:45.714312 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Aug 13 00:51:45.714360 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Aug 13 00:51:45.714408 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Aug 13 00:51:45.714459 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Aug 13 00:51:45.714507 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Aug 13 00:51:45.714557 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Aug 13 00:51:45.714604 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Aug 13 00:51:45.714653 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 00:51:45.714701 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 00:51:45.714752 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.714820 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.714876 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.714929 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.714979 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.715029 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.715107 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.715268 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.715325 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.715391 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.715442 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.715490 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.715538 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.715586 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.715634 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.715683 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.715734 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.715781 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.715830 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.715877 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.715924 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716023 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716092 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716140 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716204 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716253 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716301 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716348 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] Aug 13 00:51:45.716397 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716445 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716493 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716541 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716593 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716640 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716688 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716735 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716783 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716831 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716881 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.716929 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.716979 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.717028 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.717389 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.717473 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.717537 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.720603 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.720679 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.720734 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.722142 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.722223 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.722282 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.722337 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.722390 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.722443 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.722496 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.722548 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.722601 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.722652 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.722717 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.722773 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.722828 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.722880 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.722934 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.722985 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723037 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.723089 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723141 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.723208 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723264 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.723316 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723369 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.723421 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723475 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] Aug 13 00:51:45.723526 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723581 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.723635 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723689 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.723743 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723797 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.723849 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.723903 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.723955 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.724009 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Aug 13 00:51:45.724061 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Aug 13 00:51:45.724115 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 00:51:45.724175 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Aug 13 00:51:45.724229 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Aug 13 00:51:45.724283 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Aug 13 00:51:45.724335 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 00:51:45.724393 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Aug 13 00:51:45.724447 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Aug 13 00:51:45.724499 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Aug 13 00:51:45.724550 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Aug 13 00:51:45.724602 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 00:51:45.724656 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Aug 13 00:51:45.724710 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] Aug 13 00:51:45.724762 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Aug 13 00:51:45.724814 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 00:51:45.724867 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Aug 13 00:51:45.724926 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Aug 13 00:51:45.724978 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Aug 13 00:51:45.725029 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 00:51:45.725081 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Aug 13 00:51:45.725133 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Aug 13 00:51:45.725200 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 00:51:45.725255 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Aug 13 00:51:45.725308 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Aug 13 00:51:45.725360 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 00:51:45.725415 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Aug 13 00:51:45.725467 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Aug 13 00:51:45.725522 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 00:51:45.725576 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Aug 13 00:51:45.725629 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Aug 13 00:51:45.725681 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 00:51:45.725734 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Aug 13 00:51:45.725787 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Aug 13 00:51:45.725839 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 00:51:45.725897 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Aug 13 
00:51:45.725968 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Aug 13 00:51:45.726026 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Aug 13 00:51:45.726078 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Aug 13 00:51:45.726132 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 00:51:45.726550 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Aug 13 00:51:45.726606 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Aug 13 00:51:45.726656 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Aug 13 00:51:45.726704 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 00:51:45.726754 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Aug 13 00:51:45.726802 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Aug 13 00:51:45.726849 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Aug 13 00:51:45.726900 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 00:51:45.726948 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Aug 13 00:51:45.726996 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Aug 13 00:51:45.727043 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 00:51:45.727091 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Aug 13 00:51:45.727138 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Aug 13 00:51:45.727194 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 00:51:45.727243 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Aug 13 00:51:45.727291 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Aug 13 00:51:45.727342 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 00:51:45.727391 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Aug 13 00:51:45.727441 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Aug 13 
00:51:45.727488 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 00:51:45.727536 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Aug 13 00:51:45.727584 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 00:51:45.727633 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 00:51:45.727682 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 00:51:45.727730 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 00:51:45.727791 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 00:51:45.727854 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 00:51:45.727911 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 00:51:45.727961 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 00:51:45.728035 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 00:51:45.728291 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 00:51:45.728346 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 00:51:45.728395 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 00:51:45.728442 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 00:51:45.728490 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 00:51:45.728541 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 00:51:45.728589 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 00:51:45.728636 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 00:51:45.728686 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 00:51:45.728733 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 00:51:45.728780 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 00:51:45.728828 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] Aug 13 00:51:45.728875 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 00:51:45.728922 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 00:51:45.728971 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 00:51:45.729020 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 00:51:45.729068 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 00:51:45.729116 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 00:51:45.729183 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 00:51:45.729234 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 00:51:45.729284 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 00:51:45.729332 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 00:51:45.729379 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 00:51:45.729427 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 00:51:45.729480 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 00:51:45.729527 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 00:51:45.729575 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 00:51:45.729622 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 00:51:45.729671 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 00:51:45.729718 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 00:51:45.729766 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 00:51:45.729814 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 00:51:45.729862 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 00:51:45.729909 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 00:51:45.729961 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] Aug 13 00:51:45.730008 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 00:51:45.730055 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 00:51:45.730104 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 00:51:45.730196 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 00:51:45.730252 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 00:51:45.730302 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 00:51:45.730348 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 00:51:45.730395 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 00:51:45.730446 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 00:51:45.730494 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 00:51:45.730541 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 00:51:45.730588 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 00:51:45.730631 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 00:51:45.730673 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Aug 13 00:51:45.730715 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Aug 13 00:51:45.730756 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Aug 13 00:51:45.730802 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Aug 13 00:51:45.730849 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Aug 13 00:51:45.730898 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 00:51:45.730948 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 00:51:45.730992 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 00:51:45.731036 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] Aug 13 00:51:45.731126 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Aug 13 00:51:45.731204 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Aug 13 00:51:45.731263 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Aug 13 00:51:45.731309 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Aug 13 00:51:45.731353 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 00:51:45.731403 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Aug 13 00:51:45.731448 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Aug 13 00:51:45.731492 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 00:51:45.731541 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Aug 13 00:51:45.731588 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Aug 13 00:51:45.731632 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 00:51:45.731682 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Aug 13 00:51:45.731727 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 00:51:45.731775 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Aug 13 00:51:45.731820 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 00:51:45.731872 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Aug 13 00:51:45.731924 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 00:51:45.731986 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Aug 13 00:51:45.732031 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 00:51:45.732081 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Aug 13 00:51:45.732125 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 00:51:45.732198 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Aug 13 00:51:45.732567 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Aug 13 00:51:45.732617 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 00:51:45.732667 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Aug 13 00:51:45.732853 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Aug 13 00:51:45.732908 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 00:51:45.732967 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Aug 13 00:51:45.733016 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Aug 13 00:51:45.733403 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 00:51:45.733460 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Aug 13 00:51:45.733507 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 00:51:45.733698 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Aug 13 00:51:45.733749 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 00:51:45.733802 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Aug 13 00:51:45.734141 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 00:51:45.734217 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Aug 13 00:51:45.734265 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 00:51:45.734315 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Aug 13 00:51:45.734582 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 00:51:45.734638 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Aug 13 00:51:45.734685 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Aug 13 00:51:45.734751 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 00:51:45.735013 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Aug 13 00:51:45.735066 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Aug 13 00:51:45.735113 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 00:51:45.736066 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Aug 13 00:51:45.736125 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Aug 13 00:51:45.736193 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 00:51:45.736554 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Aug 13 00:51:45.736607 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 00:51:45.736662 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Aug 13 00:51:45.736709 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 00:51:45.737067 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Aug 13 00:51:45.737115 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 00:51:45.737208 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Aug 13 00:51:45.737257 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 00:51:45.737306 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Aug 13 00:51:45.737355 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 00:51:45.737552 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Aug 13 00:51:45.737602 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Aug 13 00:51:45.737647 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 00:51:45.737881 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Aug 13 00:51:45.737936 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Aug 13 00:51:45.737982 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 00:51:45.738032 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Aug 13 00:51:45.738085 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 00:51:45.738136 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Aug 13 00:51:45.738451 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 00:51:45.738690 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Aug 13 00:51:45.738742 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 00:51:45.738796 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Aug 13 00:51:45.738843 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 00:51:45.738908 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Aug 13 00:51:45.738954 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 00:51:45.739004 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Aug 13 00:51:45.739048 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 00:51:45.739103 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 00:51:45.739114 kernel: PCI: CLS 32 bytes, default 64 Aug 13 00:51:45.739122 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 00:51:45.739129 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Aug 13 00:51:45.739135 kernel: clocksource: Switched to clocksource tsc Aug 13 00:51:45.739142 kernel: Initialise system trusted keyrings Aug 13 00:51:45.739148 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 00:51:45.739405 kernel: Key type asymmetric registered Aug 13 00:51:45.739413 kernel: Asymmetric key parser 'x509' registered Aug 13 00:51:45.739422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 00:51:45.739428 kernel: io scheduler mq-deadline registered Aug 13 00:51:45.739434 kernel: io scheduler kyber registered Aug 13 00:51:45.739441 kernel: io scheduler bfq 
registered Aug 13 00:51:45.739502 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Aug 13 00:51:45.739643 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.739697 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Aug 13 00:51:45.739747 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.739986 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Aug 13 00:51:45.740042 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.740094 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Aug 13 00:51:45.740148 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.740502 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Aug 13 00:51:45.740749 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.740809 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Aug 13 00:51:45.740861 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.740912 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Aug 13 00:51:45.740967 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.741018 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Aug 13 00:51:45.741068 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.741121 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Aug 13 00:51:45.741426 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.741484 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Aug 13 00:51:45.741630 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.741685 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Aug 13 00:51:45.741736 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.741983 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Aug 13 00:51:45.742041 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.742094 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Aug 13 00:51:45.742199 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.742497 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Aug 13 00:51:45.742553 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.742609 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Aug 13 00:51:45.742661 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.742712 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Aug 13 00:51:45.742762 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.742812 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Aug 13 00:51:45.742861 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.742914 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Aug 13 00:51:45.742963 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.743012 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Aug 13 00:51:45.743061 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.743110 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Aug 13 00:51:45.743351 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.743412 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Aug 13 00:51:45.743462 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.743513 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Aug 13 00:51:45.743573 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.743627 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Aug 13 00:51:45.743679 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.743729 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Aug 13 00:51:45.743779 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.743829 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Aug 13 00:51:45.743878 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.743927 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Aug 13 00:51:45.743975 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.744028 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Aug 13 00:51:45.744077 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.744126 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Aug 13 00:51:45.744449 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.744505 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Aug 13 00:51:45.744649 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.744703 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Aug 13 00:51:45.744753 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.745111 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Aug 13 00:51:45.745191 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.745249 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Aug 13 00:51:45.745300 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 00:51:45.745310 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:51:45.745316 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:51:45.745323 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:51:45.745329 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Aug 13 00:51:45.745336 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:51:45.745342 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:51:45.745396 kernel: rtc_cmos 00:01: registered as rtc0 Aug 13 00:51:45.745443 kernel: rtc_cmos 00:01: setting system clock to 2025-08-13T00:51:45 UTC (1755046305) Aug 13 00:51:45.745486 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Aug 13 00:51:45.745495 kernel: intel_pstate: CPU model not supported Aug 13 00:51:45.745501 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:51:45.745507 kernel: Segment Routing with IPv6 Aug 13 00:51:45.745514 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:51:45.745520 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:51:45.745528 kernel: Key type dns_resolver registered Aug 13 00:51:45.745534 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:51:45.745540 kernel: IPI shorthand broadcast: enabled Aug 13 00:51:45.745547 kernel: sched_clock: Marking stable (860322706, 222178272)->(1146343054, -63842076) Aug 13 00:51:45.745553 kernel: registered taskstats version 1 Aug 13 00:51:45.745559 kernel: Loading compiled-in X.509 certificates Aug 13 00:51:45.745566 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 00:51:45.745572 kernel: Key type .fscrypt registered Aug 13 00:51:45.745578 kernel: Key type fscrypt-provisioning 
registered Aug 13 00:51:45.745585 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:51:45.745591 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:51:45.745597 kernel: ima: No architecture policies found Aug 13 00:51:45.745605 kernel: clk: Disabling unused clocks Aug 13 00:51:45.745611 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 00:51:45.745617 kernel: Write protecting the kernel read-only data: 28672k Aug 13 00:51:45.745624 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 00:51:45.745630 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 00:51:45.745636 kernel: Run /init as init process Aug 13 00:51:45.745643 kernel: with arguments: Aug 13 00:51:45.745649 kernel: /init Aug 13 00:51:45.745655 kernel: with environment: Aug 13 00:51:45.745661 kernel: HOME=/ Aug 13 00:51:45.745667 kernel: TERM=linux Aug 13 00:51:45.745673 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:51:45.745681 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:51:45.745689 systemd[1]: Detected virtualization vmware. Aug 13 00:51:45.745697 systemd[1]: Detected architecture x86-64. Aug 13 00:51:45.745703 systemd[1]: Running in initrd. Aug 13 00:51:45.745709 systemd[1]: No hostname configured, using default hostname. Aug 13 00:51:45.745716 systemd[1]: Hostname set to <localhost>. Aug 13 00:51:45.745722 systemd[1]: Initializing machine ID from random generator. Aug 13 00:51:45.745729 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:51:45.745735 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:51:45.745742 systemd[1]: Reached target cryptsetup.target. 
Aug 13 00:51:45.745749 systemd[1]: Reached target paths.target. Aug 13 00:51:45.745755 systemd[1]: Reached target slices.target. Aug 13 00:51:45.745762 systemd[1]: Reached target swap.target. Aug 13 00:51:45.745768 systemd[1]: Reached target timers.target. Aug 13 00:51:45.745775 systemd[1]: Listening on iscsid.socket. Aug 13 00:51:45.745781 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:51:45.745787 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:51:45.745794 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:51:45.745801 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:51:45.745807 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:51:45.745814 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:51:45.745820 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:51:45.745826 systemd[1]: Reached target sockets.target. Aug 13 00:51:45.745832 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:51:45.745839 systemd[1]: Finished network-cleanup.service. Aug 13 00:51:45.745845 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:51:45.745852 systemd[1]: Starting systemd-journald.service... Aug 13 00:51:45.745859 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:51:45.745865 systemd[1]: Starting systemd-resolved.service... Aug 13 00:51:45.745871 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:51:45.745878 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:51:45.745884 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:51:45.745891 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:51:45.745897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:51:45.745904 kernel: audit: type=1130 audit(1755046305.677:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:45.745911 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 00:51:45.745918 kernel: audit: type=1130 audit(1755046305.680:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.745924 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:51:45.745930 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:51:45.745937 kernel: audit: type=1130 audit(1755046305.692:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.745943 systemd[1]: Starting dracut-cmdline.service... Aug 13 00:51:45.745949 systemd[1]: Started systemd-resolved.service. Aug 13 00:51:45.745955 systemd[1]: Reached target nss-lookup.target. Aug 13 00:51:45.745964 kernel: audit: type=1130 audit(1755046305.721:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.745970 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:51:45.745976 kernel: Bridge firewalling registered Aug 13 00:51:45.745987 systemd-journald[217]: Journal started Aug 13 00:51:45.746021 systemd-journald[217]: Runtime Journal (/run/log/journal/90ddeeeaa5484367966970c29ccf87a9) is 4.8M, max 38.8M, 34.0M free. Aug 13 00:51:45.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:45.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.668366 systemd-modules-load[218]: Inserted module 'overlay' Aug 13 00:51:45.718097 systemd-resolved[219]: Positive Trust Anchors: Aug 13 00:51:45.718110 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:51:45.718131 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:51:45.721448 systemd-resolved[219]: Defaulting to hostname 'linux'. 
Aug 13 00:51:45.732234 systemd-modules-load[218]: Inserted module 'br_netfilter' Aug 13 00:51:45.747665 dracut-cmdline[233]: dracut-dracut-053 Aug 13 00:51:45.747665 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Aug 13 00:51:45.747665 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:51:45.750243 systemd[1]: Started systemd-journald.service. Aug 13 00:51:45.750257 kernel: SCSI subsystem initialized Aug 13 00:51:45.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.753162 kernel: audit: type=1130 audit(1755046305.749:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.761715 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:51:45.761754 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:51:45.763102 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:51:45.766733 systemd-modules-load[218]: Inserted module 'dm_multipath' Aug 13 00:51:45.770160 kernel: audit: type=1130 audit(1755046305.766:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:45.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.767127 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:51:45.767657 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:51:45.779997 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:51:45.780125 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:51:45.782714 kernel: audit: type=1130 audit(1755046305.779:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.791171 kernel: iscsi: registered transport (tcp) Aug 13 00:51:45.808395 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:51:45.808442 kernel: QLogic iSCSI HBA Driver Aug 13 00:51:45.826086 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:51:45.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:45.826887 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:51:45.830174 kernel: audit: type=1130 audit(1755046305.825:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:45.869203 kernel: raid6: avx2x4 gen() 42760 MB/s Aug 13 00:51:45.884177 kernel: raid6: avx2x4 xor() 16907 MB/s Aug 13 00:51:45.901172 kernel: raid6: avx2x2 gen() 50897 MB/s Aug 13 00:51:45.918295 kernel: raid6: avx2x2 xor() 31266 MB/s Aug 13 00:51:45.935175 kernel: raid6: avx2x1 gen() 40346 MB/s Aug 13 00:51:45.952175 kernel: raid6: avx2x1 xor() 27007 MB/s Aug 13 00:51:45.969212 kernel: raid6: sse2x4 gen() 20759 MB/s Aug 13 00:51:45.986183 kernel: raid6: sse2x4 xor() 11466 MB/s Aug 13 00:51:46.003205 kernel: raid6: sse2x2 gen() 21275 MB/s Aug 13 00:51:46.020170 kernel: raid6: sse2x2 xor() 13166 MB/s Aug 13 00:51:46.037171 kernel: raid6: sse2x1 gen() 17922 MB/s Aug 13 00:51:46.054372 kernel: raid6: sse2x1 xor() 8849 MB/s Aug 13 00:51:46.054415 kernel: raid6: using algorithm avx2x2 gen() 50897 MB/s Aug 13 00:51:46.054424 kernel: raid6: .... xor() 31266 MB/s, rmw enabled Aug 13 00:51:46.055559 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:51:46.064168 kernel: xor: automatically using best checksumming function avx Aug 13 00:51:46.127170 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:51:46.132724 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:51:46.133625 systemd[1]: Starting systemd-udevd.service... Aug 13 00:51:46.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:46.138167 kernel: audit: type=1130 audit(1755046306.131:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:46.132000 audit: BPF prog-id=7 op=LOAD Aug 13 00:51:46.132000 audit: BPF prog-id=8 op=LOAD Aug 13 00:51:46.145495 systemd-udevd[416]: Using default interface naming scheme 'v252'. 
Aug 13 00:51:46.148264 systemd[1]: Started systemd-udevd.service. Aug 13 00:51:46.149096 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:51:46.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:46.160586 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Aug 13 00:51:46.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:46.178042 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:51:46.178660 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:51:46.242854 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:51:46.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:46.301165 kernel: VMware PVSCSI driver - version 1.0.7.0-k Aug 13 00:51:46.302367 kernel: vmw_pvscsi: using 64bit dma Aug 13 00:51:46.302391 kernel: vmw_pvscsi: max_id: 16 Aug 13 00:51:46.302399 kernel: vmw_pvscsi: setting ring_pages to 8 Aug 13 00:51:46.305166 kernel: vmw_pvscsi: enabling reqCallThreshold Aug 13 00:51:46.305194 kernel: vmw_pvscsi: driver-based request coalescing enabled Aug 13 00:51:46.305203 kernel: vmw_pvscsi: using MSI-X Aug 13 00:51:46.307852 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Aug 13 00:51:46.307961 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Aug 13 00:51:46.311265 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Aug 13 00:51:46.326163 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Aug 13 00:51:46.331162 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Aug 13 00:51:46.334941 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Aug 13 00:51:46.335650 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:51:46.339160 kernel: libata version 3.00 loaded. Aug 13 00:51:46.342159 kernel: ata_piix 0000:00:07.1: version 2.13 Aug 13 00:51:46.358135 kernel: AVX2 version of gcm_enc/dec engaged. 
Aug 13 00:51:46.358147 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Aug 13 00:51:46.358222 kernel: AES CTR mode by8 optimization enabled Aug 13 00:51:46.358231 kernel: scsi host1: ata_piix Aug 13 00:51:46.358293 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Aug 13 00:51:46.367691 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:51:46.367759 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Aug 13 00:51:46.367824 kernel: sd 0:0:0:0: [sda] Cache data unavailable Aug 13 00:51:46.367883 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Aug 13 00:51:46.367942 kernel: scsi host2: ata_piix Aug 13 00:51:46.368001 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Aug 13 00:51:46.368009 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Aug 13 00:51:46.368016 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:46.368023 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:51:46.525173 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Aug 13 00:51:46.528174 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Aug 13 00:51:46.554210 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Aug 13 00:51:46.570665 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:51:46.570677 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (469) Aug 13 00:51:46.570686 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 00:51:46.562177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:51:46.565876 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:51:46.571654 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:51:46.571957 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Aug 13 00:51:46.572656 systemd[1]: Starting disk-uuid.service... Aug 13 00:51:46.576647 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:51:46.598170 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:46.603172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:46.607169 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:47.608745 disk-uuid[550]: The operation has completed successfully. Aug 13 00:51:47.609161 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:47.644592 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:51:47.644652 systemd[1]: Finished disk-uuid.service. Aug 13 00:51:47.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.647114 systemd[1]: Starting verity-setup.service... Aug 13 00:51:47.657171 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:51:47.700214 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:51:47.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.701234 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:51:47.701411 systemd[1]: Finished verity-setup.service. Aug 13 00:51:47.756179 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:51:47.756516 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:51:47.757091 systemd[1]: Starting afterburn-network-kargs.service... Aug 13 00:51:47.757542 systemd[1]: Starting ignition-setup.service... 
Aug 13 00:51:47.780653 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:51:47.780689 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:51:47.780699 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:51:47.789164 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:51:47.796571 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:51:47.801944 systemd[1]: Finished ignition-setup.service. Aug 13 00:51:47.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.802650 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:51:47.861134 systemd[1]: Finished afterburn-network-kargs.service. Aug 13 00:51:47.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.861746 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:51:47.909010 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:51:47.909853 systemd[1]: Starting systemd-networkd.service... Aug 13 00:51:47.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:47.908000 audit: BPF prog-id=9 op=LOAD Aug 13 00:51:47.924249 systemd-networkd[736]: lo: Link UP Aug 13 00:51:47.924254 systemd-networkd[736]: lo: Gained carrier Aug 13 00:51:47.924728 systemd-networkd[736]: Enumeration completed Aug 13 00:51:47.928795 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Aug 13 00:51:47.928929 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Aug 13 00:51:47.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.924879 systemd[1]: Started systemd-networkd.service. Aug 13 00:51:47.925058 systemd[1]: Reached target network.target. Aug 13 00:51:47.925110 systemd-networkd[736]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Aug 13 00:51:47.925548 systemd[1]: Starting iscsiuio.service... Aug 13 00:51:47.929384 systemd-networkd[736]: ens192: Link UP Aug 13 00:51:47.929386 systemd-networkd[736]: ens192: Gained carrier Aug 13 00:51:47.929847 systemd[1]: Started iscsiuio.service. Aug 13 00:51:47.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.930823 systemd[1]: Starting iscsid.service... Aug 13 00:51:47.933325 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:51:47.933325 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:51:47.933325 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:51:47.933325 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:51:47.933325 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:51:47.933325 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:51:47.934460 systemd[1]: Started iscsid.service. Aug 13 00:51:47.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.935504 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:51:47.942137 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:51:47.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.942548 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:51:47.943373 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:51:47.943456 systemd[1]: Reached target remote-fs.target. Aug 13 00:51:47.944654 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:51:47.949725 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:51:47.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:47.954324 ignition[608]: Ignition 2.14.0 Aug 13 00:51:47.954332 ignition[608]: Stage: fetch-offline Aug 13 00:51:47.954366 ignition[608]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:47.954381 ignition[608]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 00:51:47.958016 ignition[608]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 00:51:47.958096 ignition[608]: parsed url from cmdline: "" Aug 13 00:51:47.958099 ignition[608]: no config URL provided Aug 13 00:51:47.958102 ignition[608]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:51:47.958107 ignition[608]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:51:47.964571 ignition[608]: config successfully fetched Aug 13 00:51:47.964607 ignition[608]: parsing config with SHA512: d79cecf69ab39986cac841e057b97a243ab8006e19a13509c70fd9b7c4424c80fb9ffc05fd9bb625bfa1b531bbf2d77f8402f81ac142b0de90449927410bf5e4 Aug 13 00:51:47.968453 unknown[608]: fetched base config from "system" Aug 13 00:51:47.968461 unknown[608]: fetched user config from "vmware" Aug 13 00:51:47.968865 ignition[608]: fetch-offline: fetch-offline passed Aug 13 00:51:47.968946 ignition[608]: Ignition finished successfully Aug 13 00:51:47.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.969502 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:51:47.969661 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 00:51:47.970129 systemd[1]: Starting ignition-kargs.service... 
Aug 13 00:51:47.975375 ignition[756]: Ignition 2.14.0 Aug 13 00:51:47.975382 ignition[756]: Stage: kargs Aug 13 00:51:47.975442 ignition[756]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:47.975452 ignition[756]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 00:51:47.976732 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 00:51:47.978282 ignition[756]: kargs: kargs passed Aug 13 00:51:47.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.979066 systemd[1]: Finished ignition-kargs.service. Aug 13 00:51:47.978320 ignition[756]: Ignition finished successfully Aug 13 00:51:47.979723 systemd[1]: Starting ignition-disks.service... Aug 13 00:51:47.984314 ignition[762]: Ignition 2.14.0 Aug 13 00:51:47.984587 ignition[762]: Stage: disks Aug 13 00:51:47.984758 ignition[762]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:47.984908 ignition[762]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 00:51:47.986299 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 00:51:47.987901 ignition[762]: disks: disks passed Aug 13 00:51:47.988043 ignition[762]: Ignition finished successfully Aug 13 00:51:47.988672 systemd[1]: Finished ignition-disks.service. Aug 13 00:51:47.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:47.988859 systemd[1]: Reached target initrd-root-device.target. 
Aug 13 00:51:47.988969 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:51:47.989131 systemd[1]: Reached target local-fs.target. Aug 13 00:51:47.989296 systemd[1]: Reached target sysinit.target. Aug 13 00:51:47.989450 systemd[1]: Reached target basic.target. Aug 13 00:51:47.990107 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:51:48.001450 systemd-fsck[770]: ROOT: clean, 629/1628000 files, 124064/1617920 blocks Aug 13 00:51:48.002664 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:51:48.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:48.003378 systemd[1]: Mounting sysroot.mount... Aug 13 00:51:48.010932 systemd[1]: Mounted sysroot.mount. Aug 13 00:51:48.011081 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:51:48.011299 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:51:48.012312 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:51:48.012662 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 00:51:48.012684 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:51:48.012698 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:51:48.014292 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:51:48.015031 systemd[1]: Starting initrd-setup-root.service... 
Aug 13 00:51:48.018016 initrd-setup-root[780]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:51:48.023278 initrd-setup-root[788]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:51:48.025605 initrd-setup-root[796]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:51:48.027830 initrd-setup-root[804]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:51:48.055218 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:51:48.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:48.055784 systemd[1]: Starting ignition-mount.service... Aug 13 00:51:48.056236 systemd[1]: Starting sysroot-boot.service... Aug 13 00:51:48.060128 bash[821]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 00:51:48.065045 ignition[822]: INFO : Ignition 2.14.0 Aug 13 00:51:48.065289 ignition[822]: INFO : Stage: mount Aug 13 00:51:48.065454 ignition[822]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:48.065603 ignition[822]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 00:51:48.067066 ignition[822]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 00:51:48.068629 ignition[822]: INFO : mount: mount passed Aug 13 00:51:48.068774 ignition[822]: INFO : Ignition finished successfully Aug 13 00:51:48.069397 systemd[1]: Finished ignition-mount.service. Aug 13 00:51:48.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:48.082025 systemd[1]: Finished sysroot-boot.service. 
Aug 13 00:51:48.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:48.229171 systemd-resolved[219]: Detected conflict on linux IN A 139.178.70.105 Aug 13 00:51:48.229183 systemd-resolved[219]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Aug 13 00:51:48.714797 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:51:48.723168 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (831) Aug 13 00:51:48.725450 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:51:48.725466 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:51:48.725475 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:51:48.730160 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:51:48.731000 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:51:48.731569 systemd[1]: Starting ignition-files.service... 
Aug 13 00:51:48.740731 ignition[851]: INFO : Ignition 2.14.0 Aug 13 00:51:48.740731 ignition[851]: INFO : Stage: files Aug 13 00:51:48.741058 ignition[851]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:48.741058 ignition[851]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 00:51:48.742092 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 00:51:48.744606 ignition[851]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:51:48.745276 ignition[851]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:51:48.745276 ignition[851]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:51:48.747881 ignition[851]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:51:48.748048 ignition[851]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:51:48.748296 unknown[851]: wrote ssh authorized keys file for user: core Aug 13 00:51:48.748509 ignition[851]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:51:48.748817 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:51:48.748987 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:51:48.748987 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:51:48.748987 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:51:48.789258 ignition[851]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:51:49.182974 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:51:49.183232 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:51:49.183232 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:51:49.183232 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:51:49.183702 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:51:49.183702 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:51:49.183702 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:51:49.183702 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:51:49.183702 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:51:49.184463 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:51:49.184463 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:51:49.184463 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:51:49.184463 
ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:51:49.186231 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Aug 13 00:51:49.186231 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:51:49.188126 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2923978474" Aug 13 00:51:49.188351 ignition[851]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2923978474": device or resource busy Aug 13 00:51:49.188557 ignition[851]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2923978474", trying btrfs: device or resource busy Aug 13 00:51:49.188768 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2923978474" Aug 13 00:51:49.190858 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2923978474" Aug 13 00:51:49.191935 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2923978474" Aug 13 00:51:49.192762 systemd[1]: mnt-oem2923978474.mount: Deactivated successfully. 
Aug 13 00:51:49.193187 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2923978474" Aug 13 00:51:49.193395 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Aug 13 00:51:49.193601 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:51:49.193864 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:51:49.382397 systemd-networkd[736]: ens192: Gained IPv6LL Aug 13 00:51:49.691387 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Aug 13 00:51:49.975288 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:51:49.983689 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Aug 13 00:51:49.983981 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Aug 13 00:51:49.984362 ignition[851]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" Aug 13 00:51:49.984503 ignition[851]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" Aug 13 00:51:49.984647 ignition[851]: INFO : files: op(12): [started] processing unit "containerd.service" Aug 13 00:51:49.984814 ignition[851]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:51:49.985089 ignition[851]: INFO : files: op(12): op(13): [finished] writing 
systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:51:49.985309 ignition[851]: INFO : files: op(12): [finished] processing unit "containerd.service" Aug 13 00:51:49.985451 ignition[851]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Aug 13 00:51:49.985609 ignition[851]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:51:49.985841 ignition[851]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:51:49.986033 ignition[851]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Aug 13 00:51:49.986184 ignition[851]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Aug 13 00:51:49.986349 ignition[851]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:51:49.986588 ignition[851]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:51:49.986783 ignition[851]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Aug 13 00:51:49.986939 ignition[851]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 00:51:49.987094 ignition[851]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:51:50.172697 ignition[851]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:51:50.172992 ignition[851]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 00:51:50.173174 ignition[851]: INFO : files: op(1a): [started] setting preset to enabled for 
"vmtoolsd.service" Aug 13 00:51:50.173354 ignition[851]: INFO : files: op(1a): [finished] setting preset to enabled for "vmtoolsd.service" Aug 13 00:51:50.173514 ignition[851]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:51:50.173682 ignition[851]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:51:50.173937 ignition[851]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:51:50.174188 ignition[851]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:51:50.174368 ignition[851]: INFO : files: files passed Aug 13 00:51:50.174506 ignition[851]: INFO : Ignition finished successfully Aug 13 00:51:50.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.175263 systemd[1]: Finished ignition-files.service. Aug 13 00:51:50.179291 kernel: kauditd_printk_skb: 24 callbacks suppressed Aug 13 00:51:50.179312 kernel: audit: type=1130 audit(1755046310.174:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.176510 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:51:50.178505 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:51:50.178887 systemd[1]: Starting ignition-quench.service... Aug 13 00:51:50.181908 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:51:50.181961 systemd[1]: Finished ignition-quench.service. 
Aug 13 00:51:50.182934 initrd-setup-root-after-ignition[877]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:51:50.187900 kernel: audit: type=1130 audit(1755046310.181:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.187915 kernel: audit: type=1131 audit(1755046310.181:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.183409 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:51:50.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.188351 systemd[1]: Reached target ignition-complete.target. Aug 13 00:51:50.191190 kernel: audit: type=1130 audit(1755046310.187:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.191508 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:51:50.200057 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:51:50.200304 systemd[1]: Finished initrd-parse-etc.service. 
Aug 13 00:51:50.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.200634 systemd[1]: Reached target initrd-fs.target. Aug 13 00:51:50.205455 kernel: audit: type=1130 audit(1755046310.199:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.205471 kernel: audit: type=1131 audit(1755046310.199:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.205516 systemd[1]: Reached target initrd.target. Aug 13 00:51:50.205661 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:51:50.206199 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:51:50.213039 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:51:50.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.216312 kernel: audit: type=1130 audit(1755046310.212:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.213848 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:51:50.219748 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:51:50.220049 systemd[1]: Stopped target remote-cryptsetup.target. 
Aug 13 00:51:50.220348 systemd[1]: Stopped target timers.target. Aug 13 00:51:50.220619 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:51:50.220828 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:51:50.221158 systemd[1]: Stopped target initrd.target. Aug 13 00:51:50.221412 systemd[1]: Stopped target basic.target. Aug 13 00:51:50.221680 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:51:50.221956 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:51:50.222225 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:51:50.222489 systemd[1]: Stopped target remote-fs.target. Aug 13 00:51:50.222743 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:51:50.223028 systemd[1]: Stopped target sysinit.target. Aug 13 00:51:50.223306 systemd[1]: Stopped target local-fs.target. Aug 13 00:51:50.223564 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:51:50.223825 systemd[1]: Stopped target swap.target. Aug 13 00:51:50.224069 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:51:50.224285 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:51:50.224584 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:51:50.224826 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:51:50.225025 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:51:50.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:50.228165 kernel: audit: type=1131 audit(1755046310.219:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.228185 kernel: audit: type=1131 audit(1755046310.223:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.230073 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:51:50.230168 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:51:50.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.232828 systemd[1]: Stopped target paths.target. Aug 13 00:51:50.233068 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:51:50.233208 kernel: audit: type=1131 audit(1755046310.228:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.235179 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:51:50.235472 systemd[1]: Stopped target slices.target. Aug 13 00:51:50.235739 systemd[1]: Stopped target sockets.target. Aug 13 00:51:50.235993 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:51:50.236198 systemd[1]: Closed iscsid.socket. Aug 13 00:51:50.236603 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:51:50.236789 systemd[1]: Closed iscsiuio.socket. 
Aug 13 00:51:50.237056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:51:50.237300 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:51:50.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.237642 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:51:50.237846 systemd[1]: Stopped ignition-files.service. Aug 13 00:51:50.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.238764 systemd[1]: Stopping ignition-mount.service... Aug 13 00:51:50.239035 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:51:50.239261 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:51:50.239956 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:51:50.240183 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:51:50.240404 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:51:50.240713 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:51:50.240931 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:51:50.242960 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:51:50.243100 ignition[890]: INFO : Ignition 2.14.0 Aug 13 00:51:50.243100 ignition[890]: INFO : Stage: umount Aug 13 00:51:50.243100 ignition[890]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:50.243100 ignition[890]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 00:51:50.243764 systemd[1]: Finished initrd-cleanup.service. 
Aug 13 00:51:50.244335 ignition[890]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 00:51:50.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.249487 ignition[890]: INFO : umount: umount passed Aug 13 00:51:50.249638 ignition[890]: INFO : Ignition finished successfully Aug 13 00:51:50.249984 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:51:50.250042 systemd[1]: Stopped ignition-mount.service. Aug 13 00:51:50.250316 systemd[1]: Stopped target network.target. Aug 13 00:51:50.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.250423 systemd[1]: ignition-disks.service: Deactivated successfully. 
Aug 13 00:51:50.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.250448 systemd[1]: Stopped ignition-disks.service. Aug 13 00:51:50.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.250583 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:51:50.250604 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:51:50.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.250753 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:51:50.250773 systemd[1]: Stopped ignition-setup.service. Aug 13 00:51:50.250968 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:51:50.251124 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:51:50.253818 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:51:50.253871 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:51:50.254064 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:51:50.254082 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:51:50.254618 systemd[1]: Stopping network-cleanup.service... Aug 13 00:51:50.254715 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:51:50.254744 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:51:50.254880 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Aug 13 00:51:50.254902 systemd[1]: Stopped afterburn-network-kargs.service. Aug 13 00:51:50.255015 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Aug 13 00:51:50.255038 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:51:50.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.256944 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:51:50.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.259245 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:51:50.259000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:51:50.262055 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:51:50.263577 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:51:50.263635 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:51:50.264014 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:51:50.264074 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:51:50.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:50.266586 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:51:50.266799 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:51:50.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.266000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:51:50.267405 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:51:50.267584 systemd[1]: Stopped network-cleanup.service. Aug 13 00:51:50.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.267905 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:51:50.268064 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:51:50.268298 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:51:50.268450 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:51:50.268657 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:51:50.268806 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:51:50.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.269065 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:51:50.269327 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:51:50.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.269584 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Aug 13 00:51:50.269732 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:51:50.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.270368 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:51:50.270648 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:51:50.270807 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:51:50.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.273666 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:51:50.273863 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:51:50.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.333796 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:51:50.333856 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:51:50.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.334135 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:51:50.334256 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:51:50.334283 systemd[1]: Stopped initrd-setup-root.service. 
Aug 13 00:51:50.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:50.334868 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:51:50.351700 systemd[1]: Switching root. Aug 13 00:51:50.352000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:51:50.352000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:51:50.352000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:51:50.353000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:51:50.353000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:51:50.365343 systemd-journald[217]: Journal stopped Aug 13 00:51:53.127990 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Aug 13 00:51:53.128015 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:51:53.128024 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:51:53.128031 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:51:53.128036 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:51:53.128049 kernel: SELinux: policy capability open_perms=1 Aug 13 00:51:53.128058 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:51:53.128064 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:51:53.128070 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:51:53.128076 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:51:53.128081 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:51:53.128087 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:51:53.128101 systemd[1]: Successfully loaded SELinux policy in 37.868ms. Aug 13 00:51:53.128111 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.942ms. 
Aug 13 00:51:53.128120 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:51:53.128126 systemd[1]: Detected virtualization vmware. Aug 13 00:51:53.128139 systemd[1]: Detected architecture x86-64. Aug 13 00:51:53.128146 systemd[1]: Detected first boot. Aug 13 00:51:53.128159 systemd[1]: Initializing machine ID from random generator. Aug 13 00:51:53.128166 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:51:53.128173 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:51:53.128180 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:51:53.128187 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:51:53.128195 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:51:53.128209 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:51:53.128218 systemd[1]: Unnecessary job was removed for dev-sda6.device. Aug 13 00:51:53.128225 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:51:53.128232 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:51:53.128239 systemd[1]: Created slice system-getty.slice. Aug 13 00:51:53.128246 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:51:53.128252 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Aug 13 00:51:53.128268 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:51:53.128276 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:51:53.128283 systemd[1]: Created slice user.slice. Aug 13 00:51:53.128290 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:51:53.128296 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:51:53.128302 systemd[1]: Set up automount boot.automount. Aug 13 00:51:53.128309 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:51:53.128315 systemd[1]: Reached target integritysetup.target. Aug 13 00:51:53.128322 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:51:53.129521 systemd[1]: Reached target remote-fs.target. Aug 13 00:51:53.129542 systemd[1]: Reached target slices.target. Aug 13 00:51:53.129550 systemd[1]: Reached target swap.target. Aug 13 00:51:53.129557 systemd[1]: Reached target torcx.target. Aug 13 00:51:53.129564 systemd[1]: Reached target veritysetup.target. Aug 13 00:51:53.129571 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:51:53.129578 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:51:53.129585 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:51:53.129598 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:51:53.129606 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:51:53.129613 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:51:53.129620 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:51:53.129626 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:51:53.129634 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:51:53.129646 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:51:53.129653 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:51:53.129660 systemd[1]: Mounting media.mount... 
Aug 13 00:51:53.129668 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:53.129675 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:51:53.129683 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:51:53.129690 systemd[1]: Mounting tmp.mount... Aug 13 00:51:53.129702 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:51:53.129710 systemd[1]: Starting ignition-delete-config.service... Aug 13 00:51:53.129717 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:51:53.129725 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:51:53.129731 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:53.129738 systemd[1]: Starting modprobe@drm.service... Aug 13 00:51:53.129746 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:53.129753 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:51:53.129760 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:53.129772 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:51:53.129780 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:51:53.129787 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:51:53.129794 systemd[1]: Starting systemd-journald.service... Aug 13 00:51:53.129801 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:51:53.129809 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:51:53.129816 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:51:53.129823 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:51:53.129831 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:53.129843 systemd[1]: Mounted dev-hugepages.mount. 
Aug 13 00:51:53.129850 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:51:53.129858 systemd[1]: Mounted media.mount. Aug 13 00:51:53.129865 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:51:53.129872 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:51:53.129879 systemd[1]: Mounted tmp.mount. Aug 13 00:51:53.129886 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:51:53.133179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:53.133195 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:53.133215 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:51:53.133224 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:51:53.133231 systemd[1]: Reached target network-pre.target. Aug 13 00:51:53.133239 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:51:53.133246 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:51:53.133254 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:51:53.133263 systemd-journald[1032]: Journal started Aug 13 00:51:53.133304 systemd-journald[1032]: Runtime Journal (/run/log/journal/e4d2c7f645cb4e7e9ff0318aba5de42e) is 4.8M, max 38.8M, 34.0M free. Aug 13 00:51:53.049000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:51:53.049000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:51:53.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:53.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.125000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:51:53.125000 audit[1032]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc18b50db0 a2=4000 a3=7ffc18b50e4c items=0 ppid=1 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:51:53.125000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:51:53.133795 jq[1018]: true Aug 13 00:51:53.151458 systemd[1]: Started systemd-journald.service. Aug 13 00:51:53.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:53.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.152567 systemd-journald[1032]: Time spent on flushing to /var/log/journal/e4d2c7f645cb4e7e9ff0318aba5de42e is 50.853ms for 1898 entries. Aug 13 00:51:53.152567 systemd-journald[1032]: System Journal (/var/log/journal/e4d2c7f645cb4e7e9ff0318aba5de42e) is 8.0M, max 584.8M, 576.8M free. Aug 13 00:51:53.344731 systemd-journald[1032]: Received client request to flush runtime journal. Aug 13 00:51:53.344805 kernel: fuse: init (API version 7.34) Aug 13 00:51:53.344828 kernel: loop: module loaded Aug 13 00:51:53.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:53.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:53.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.136896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:53.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.136992 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:53.138243 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:51:53.138359 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:53.346432 jq[1045]: true Aug 13 00:51:53.142812 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:51:53.142915 systemd[1]: Finished modprobe@drm.service. Aug 13 00:51:53.146960 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:51:53.147120 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:51:53.152321 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:51:53.153369 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:51:53.169172 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:51:53.193832 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:51:53.193956 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:51:53.195808 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:51:53.348025 udevadm[1106]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:51:53.198163 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:51:53.204503 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Aug 13 00:51:53.204624 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:51:53.205693 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:51:53.208207 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:51:53.217427 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:53.217531 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:53.217711 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:53.232297 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:51:53.233412 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:51:53.291187 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:51:53.292248 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:51:53.345775 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:51:53.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.371186 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:51:53.372481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:51:53.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.423279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:51:53.517346 ignition[1049]: Ignition 2.14.0 Aug 13 00:51:53.517582 ignition[1049]: deleting config from guestinfo properties Aug 13 00:51:53.524318 ignition[1049]: Successfully deleted config Aug 13 00:51:53.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:51:53.525245 systemd[1]: Finished ignition-delete-config.service. Aug 13 00:51:53.856720 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:51:53.857758 systemd[1]: Starting systemd-udevd.service... Aug 13 00:51:53.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:53.871141 systemd-udevd[1114]: Using default interface naming scheme 'v252'. Aug 13 00:51:54.503677 systemd[1]: Started systemd-udevd.service. Aug 13 00:51:54.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:54.505260 systemd[1]: Starting systemd-networkd.service... Aug 13 00:51:54.528743 systemd[1]: Found device dev-ttyS0.device. Aug 13 00:51:54.562187 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:51:54.575265 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:51:54.584546 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:51:54.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:54.624003 systemd[1]: Started systemd-userdbd.service. 
Aug 13 00:51:54.624000 audit[1115]: AVC avc: denied { confidentiality } for pid=1115 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:51:54.624000 audit[1115]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562b865cde70 a1=338ac a2=7fe068af3bc5 a3=5 items=110 ppid=1114 pid=1115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:51:54.624000 audit: CWD cwd="/" Aug 13 00:51:54.624000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=1 name=(null) inode=25018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=2 name=(null) inode=25018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=3 name=(null) inode=25019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=4 name=(null) inode=25018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=5 name=(null) inode=25020 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=6 name=(null) 
inode=25018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=7 name=(null) inode=25021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=8 name=(null) inode=25021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=9 name=(null) inode=25022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=10 name=(null) inode=25021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=11 name=(null) inode=25023 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=12 name=(null) inode=25021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=13 name=(null) inode=25024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=14 name=(null) inode=25021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=15 name=(null) inode=25025 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=16 name=(null) inode=25021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=17 name=(null) inode=25026 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=18 name=(null) inode=25018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=19 name=(null) inode=25027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=20 name=(null) inode=25027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=21 name=(null) inode=25028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=22 name=(null) inode=25027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=23 name=(null) inode=25029 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=24 name=(null) inode=25027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=25 name=(null) inode=25030 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=26 name=(null) inode=25027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=27 name=(null) inode=25031 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=28 name=(null) inode=25027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=29 name=(null) inode=25032 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=30 name=(null) inode=25018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=31 name=(null) inode=25033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=32 name=(null) inode=25033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=33 name=(null) inode=25034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=34 name=(null) inode=25033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=35 name=(null) inode=25035 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=36 name=(null) inode=25033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=37 name=(null) inode=25036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=38 name=(null) inode=25033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=39 name=(null) inode=25037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=40 name=(null) inode=25033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=41 name=(null) inode=25038 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=42 name=(null) inode=25018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=43 name=(null) inode=25039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=44 name=(null) inode=25039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=45 name=(null) inode=25040 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=46 name=(null) inode=25039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=47 name=(null) inode=25041 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=48 name=(null) inode=25039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=49 name=(null) inode=25042 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=50 name=(null) inode=25039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=51 name=(null) inode=25043 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:51:54.624000 audit: PATH item=52 name=(null) inode=25039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=53 name=(null) inode=25044 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=55 name=(null) inode=25045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=56 name=(null) inode=25045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=57 name=(null) inode=25046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=58 name=(null) inode=25045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=59 name=(null) inode=25047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=60 name=(null) inode=25045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=61 
name=(null) inode=25048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=62 name=(null) inode=25048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=63 name=(null) inode=25049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=64 name=(null) inode=25048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=65 name=(null) inode=25050 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=66 name=(null) inode=25048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=67 name=(null) inode=25051 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=68 name=(null) inode=25048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=69 name=(null) inode=25052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=70 name=(null) inode=25048 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=71 name=(null) inode=25053 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=72 name=(null) inode=25045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=73 name=(null) inode=25054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=74 name=(null) inode=25054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=75 name=(null) inode=25055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=76 name=(null) inode=25054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=77 name=(null) inode=25056 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=78 name=(null) inode=25054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=79 name=(null) inode=25057 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=80 name=(null) inode=25054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=81 name=(null) inode=25058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=82 name=(null) inode=25054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=83 name=(null) inode=25059 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=84 name=(null) inode=25045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=85 name=(null) inode=25060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=86 name=(null) inode=25060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=87 name=(null) inode=25061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=88 name=(null) inode=25060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=89 name=(null) inode=25062 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=90 name=(null) inode=25060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=91 name=(null) inode=25063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=92 name=(null) inode=25060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=93 name=(null) inode=25064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=94 name=(null) inode=25060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=95 name=(null) inode=25065 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=96 name=(null) inode=25045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=97 name=(null) inode=25066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=98 name=(null) inode=25066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=99 name=(null) inode=25067 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=100 name=(null) inode=25066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=101 name=(null) inode=25068 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=102 name=(null) inode=25066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=103 name=(null) inode=25069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=104 name=(null) inode=25066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=105 name=(null) inode=25070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=106 name=(null) inode=25066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:51:54.624000 audit: PATH item=107 name=(null) inode=25071 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PATH item=109 name=(null) inode=25072 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:54.624000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:51:54.644168 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Aug 13 00:51:54.646849 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Aug 13 00:51:54.647654 kernel: Guest personality initialized and is active Aug 13 00:51:54.649166 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 00:51:54.649238 kernel: Initialized host personality Aug 13 00:51:54.651164 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Aug 13 00:51:54.677644 (udev-worker)[1129]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Aug 13 00:51:54.693180 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:51:54.701749 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:51:54.806310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:51:54.809447 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:51:54.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:54.810562 systemd[1]: Starting lvm2-activation-early.service... 
Aug 13 00:51:54.815687 systemd-networkd[1119]: lo: Link UP Aug 13 00:51:54.815894 systemd-networkd[1119]: lo: Gained carrier Aug 13 00:51:54.816287 systemd-networkd[1119]: Enumeration completed Aug 13 00:51:54.816407 systemd-networkd[1119]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Aug 13 00:51:54.816414 systemd[1]: Started systemd-networkd.service. Aug 13 00:51:54.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:54.820707 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Aug 13 00:51:54.820862 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Aug 13 00:51:54.821157 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Aug 13 00:51:54.822309 systemd-networkd[1119]: ens192: Link UP Aug 13 00:51:54.822490 systemd-networkd[1119]: ens192: Gained carrier Aug 13 00:51:54.834987 lvm[1148]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:51:54.855803 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:51:54.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:54.855987 systemd[1]: Reached target cryptsetup.target. Aug 13 00:51:54.857035 systemd[1]: Starting lvm2-activation.service... Aug 13 00:51:54.860025 lvm[1150]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:51:54.891923 systemd[1]: Finished lvm2-activation.service. Aug 13 00:51:54.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:54.892111 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:51:54.892219 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:51:54.892234 systemd[1]: Reached target local-fs.target. Aug 13 00:51:54.892325 systemd[1]: Reached target machines.target. Aug 13 00:51:54.893440 systemd[1]: Starting ldconfig.service... Aug 13 00:51:54.894259 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:54.894299 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:54.895397 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:51:54.896331 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:51:54.897430 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:51:54.898523 systemd[1]: Starting systemd-sysext.service... Aug 13 00:51:54.902855 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1153 (bootctl) Aug 13 00:51:54.903673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:51:54.914493 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:51:54.916900 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:51:54.917025 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:51:54.929447 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:51:54.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:54.936171 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 00:51:55.204872 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:51:55.205353 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:51:55.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.206255 kernel: kauditd_printk_skb: 198 callbacks suppressed Aug 13 00:51:55.206292 kernel: audit: type=1130 audit(1755046315.204:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.233173 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:51:55.349477 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 00:51:55.364411 systemd-fsck[1166]: fsck.fat 4.2 (2021-01-31) Aug 13 00:51:55.364411 systemd-fsck[1166]: /dev/sda1: 789 files, 119324/258078 clusters Aug 13 00:51:55.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.365847 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:51:55.367201 systemd[1]: Mounting boot.mount... Aug 13 00:51:55.371327 kernel: audit: type=1130 audit(1755046315.364:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.384246 systemd[1]: Mounted boot.mount. 
Aug 13 00:51:55.386233 (sd-sysext)[1170]: Using extensions 'kubernetes'. Aug 13 00:51:55.386484 (sd-sysext)[1170]: Merged extensions into '/usr'. Aug 13 00:51:55.398463 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:55.399572 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:51:55.400396 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:55.401095 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:55.401797 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:55.401947 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:55.402024 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:55.402108 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:55.402623 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:51:55.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.406351 kernel: audit: type=1130 audit(1755046315.402:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.406350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:55.406435 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:55.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 00:51:55.409497 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:55.411170 kernel: audit: type=1130 audit(1755046315.405:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.414558 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:55.414655 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:55.415184 kernel: audit: type=1131 audit(1755046315.408:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.418338 kernel: audit: type=1130 audit(1755046315.414:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.419813 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:51:55.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:55.422700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:55.422802 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:55.423086 systemd[1]: Finished systemd-sysext.service. Aug 13 00:51:55.423186 kernel: audit: type=1131 audit(1755046315.414:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.424765 systemd[1]: Starting ensure-sysext.service... Aug 13 00:51:55.427131 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:55.428057 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:51:55.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.431183 kernel: audit: type=1130 audit(1755046315.421:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.432682 systemd[1]: Reloading. Aug 13 00:51:55.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:55.441203 kernel: audit: type=1131 audit(1755046315.421:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.441268 kernel: audit: type=1130 audit(1755046315.422:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.448609 systemd-tmpfiles[1189]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:51:55.450325 systemd-tmpfiles[1189]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:51:55.452812 systemd-tmpfiles[1189]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:51:55.481712 /usr/lib/systemd/system-generators/torcx-generator[1208]: time="2025-08-13T00:51:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:51:55.481728 /usr/lib/systemd/system-generators/torcx-generator[1208]: time="2025-08-13T00:51:55Z" level=info msg="torcx already run" Aug 13 00:51:55.569291 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:51:55.569313 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Aug 13 00:51:55.582375 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:51:55.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.625335 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:51:55.627736 systemd[1]: Starting audit-rules.service... Aug 13 00:51:55.629486 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:51:55.632800 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:51:55.634220 systemd[1]: Starting systemd-resolved.service... Aug 13 00:51:55.636106 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:51:55.641058 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:51:55.643000 audit[1287]: SYSTEM_BOOT pid=1287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.652375 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:55.653406 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:55.654994 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:55.655323 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:55.655407 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:55.656017 systemd[1]: Finished systemd-update-utmp.service. 
Aug 13 00:51:55.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.657201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:55.657293 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:55.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.661295 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:55.661397 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:55.662294 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:55.665446 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:55.668136 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:55.668292 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 00:51:55.668372 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:55.671504 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:51:55.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.671947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:55.672041 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:55.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.673810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:55.673936 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:55.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.676062 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:55.676999 systemd[1]: Finished modprobe@loop.service. 
Aug 13 00:51:55.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.677721 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:55.677786 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:55.677831 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:51:55.680857 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:55.683227 systemd[1]: Starting modprobe@drm.service... Aug 13 00:51:55.684111 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:55.686256 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:55.686606 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:55.686703 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:55.689246 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:51:55.689829 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Aug 13 00:51:55.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.692127 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:51:55.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.693683 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:51:55.693775 systemd[1]: Finished modprobe@drm.service. Aug 13 00:51:55.694161 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:55.694250 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:55.694700 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:51:55.695933 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 13 00:51:55.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.696027 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:55.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.701518 systemd[1]: Finished ensure-sysext.service. Aug 13 00:51:55.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:55.702774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:55.702875 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:55.703041 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Aug 13 00:51:55.741000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:51:55.741000 audit[1321]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcabf85a30 a2=420 a3=0 items=0 ppid=1276 pid=1321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:51:55.741000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:51:55.742457 augenrules[1321]: No rules Aug 13 00:51:55.743071 systemd[1]: Finished audit-rules.service. Aug 13 00:51:55.780104 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:51:55.780471 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:51:55.780674 systemd[1]: Reached target time-set.target. Aug 13 00:51:55.806198 systemd-resolved[1280]: Positive Trust Anchors: Aug 13 00:51:55.806207 systemd-resolved[1280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:51:55.806334 systemd-resolved[1280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:51:55.813040 systemd[1]: Finished ldconfig.service. Aug 13 00:51:55.814213 systemd[1]: Starting systemd-update-done.service... Aug 13 00:51:55.827396 systemd[1]: Finished systemd-update-done.service. 
Aug 13 00:53:25.338361 systemd-timesyncd[1281]: Contacted time server 74.6.168.73:123 (0.flatcar.pool.ntp.org). Aug 13 00:53:25.338399 systemd-timesyncd[1281]: Initial clock synchronization to Wed 2025-08-13 00:53:25.338290 UTC. Aug 13 00:53:25.370156 systemd-resolved[1280]: Defaulting to hostname 'linux'. Aug 13 00:53:25.371535 systemd[1]: Started systemd-resolved.service. Aug 13 00:53:25.371710 systemd[1]: Reached target network.target. Aug 13 00:53:25.371800 systemd[1]: Reached target nss-lookup.target. Aug 13 00:53:25.371893 systemd[1]: Reached target sysinit.target. Aug 13 00:53:25.372035 systemd[1]: Started motdgen.path. Aug 13 00:53:25.372149 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:53:25.372356 systemd[1]: Started logrotate.timer. Aug 13 00:53:25.372509 systemd[1]: Started mdadm.timer. Aug 13 00:53:25.372611 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:53:25.372716 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:53:25.372739 systemd[1]: Reached target paths.target. Aug 13 00:53:25.372826 systemd[1]: Reached target timers.target. Aug 13 00:53:25.373121 systemd[1]: Listening on dbus.socket. Aug 13 00:53:25.374192 systemd[1]: Starting docker.socket... Aug 13 00:53:25.377314 systemd[1]: Listening on sshd.socket. Aug 13 00:53:25.377460 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:25.377747 systemd[1]: Listening on docker.socket. Aug 13 00:53:25.377862 systemd[1]: Reached target sockets.target. Aug 13 00:53:25.377965 systemd[1]: Reached target basic.target. Aug 13 00:53:25.378127 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:53:25.378155 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Aug 13 00:53:25.378170 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:53:25.379103 systemd[1]: Starting containerd.service... Aug 13 00:53:25.380079 systemd[1]: Starting dbus.service... Aug 13 00:53:25.380979 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:53:25.381879 systemd[1]: Starting extend-filesystems.service... Aug 13 00:53:25.382025 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:53:25.382910 systemd[1]: Starting motdgen.service... Aug 13 00:53:25.386199 systemd[1]: Starting prepare-helm.service... Aug 13 00:53:25.387317 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:53:25.388032 jq[1336]: false Aug 13 00:53:25.388286 systemd[1]: Starting sshd-keygen.service... Aug 13 00:53:25.390814 systemd[1]: Starting systemd-logind.service... Aug 13 00:53:25.393217 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:25.393259 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:53:25.396846 systemd[1]: Starting update-engine.service... Aug 13 00:53:25.398491 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:53:25.399854 systemd[1]: Starting vmtoolsd.service... Aug 13 00:53:25.402828 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:53:25.402975 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:53:25.404308 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:53:25.404446 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:53:25.411195 jq[1350]: true Aug 13 00:53:25.412104 systemd[1]: Started vmtoolsd.service. 
Aug 13 00:53:25.416105 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:53:25.416253 systemd[1]: Finished motdgen.service. Aug 13 00:53:25.423261 extend-filesystems[1337]: Found loop1 Aug 13 00:53:25.423663 extend-filesystems[1337]: Found sda Aug 13 00:53:25.423777 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:25.423796 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:25.426597 extend-filesystems[1337]: Found sda1 Aug 13 00:53:25.428052 extend-filesystems[1337]: Found sda2 Aug 13 00:53:25.428238 extend-filesystems[1337]: Found sda3 Aug 13 00:53:25.428499 extend-filesystems[1337]: Found usr Aug 13 00:53:25.429042 jq[1368]: true Aug 13 00:53:25.429267 extend-filesystems[1337]: Found sda4 Aug 13 00:53:25.429416 extend-filesystems[1337]: Found sda6 Aug 13 00:53:25.432542 extend-filesystems[1337]: Found sda7 Aug 13 00:53:25.432542 extend-filesystems[1337]: Found sda9 Aug 13 00:53:25.432542 extend-filesystems[1337]: Checking size of /dev/sda9 Aug 13 00:53:25.448967 tar[1356]: linux-amd64/helm Aug 13 00:53:25.460073 extend-filesystems[1337]: Old size kept for /dev/sda9 Aug 13 00:53:25.463476 extend-filesystems[1337]: Found sr0 Aug 13 00:53:25.464332 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:53:25.464509 systemd[1]: Finished extend-filesystems.service. Aug 13 00:53:25.475506 dbus-daemon[1334]: [system] SELinux support is enabled Aug 13 00:53:25.475672 systemd[1]: Started dbus.service. Aug 13 00:53:25.477183 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:53:25.477199 systemd[1]: Reached target system-config.target. 
Aug 13 00:53:25.477321 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:53:25.477330 systemd[1]: Reached target user-config.target. Aug 13 00:53:25.480711 systemd-networkd[1119]: ens192: Gained IPv6LL Aug 13 00:53:25.481882 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:53:25.482063 systemd[1]: Reached target network-online.target. Aug 13 00:53:25.484398 systemd[1]: Starting kubelet.service... Aug 13 00:53:25.508237 bash[1396]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:53:25.509080 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:53:25.517182 systemd-logind[1347]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:53:25.517196 systemd-logind[1347]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:53:25.517373 systemd-logind[1347]: New seat seat0. Aug 13 00:53:25.518453 systemd[1]: Started systemd-logind.service. Aug 13 00:53:25.521571 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:53:25.521848 update_engine[1349]: I0813 00:53:25.521338 1349 main.cc:92] Flatcar Update Engine starting Aug 13 00:53:25.535030 systemd[1]: Started update-engine.service. Aug 13 00:53:25.535198 update_engine[1349]: I0813 00:53:25.535176 1349 update_check_scheduler.cc:74] Next update check in 5m21s Aug 13 00:53:25.536893 systemd[1]: Started locksmithd.service. Aug 13 00:53:25.544350 env[1359]: time="2025-08-13T00:53:25.544319866Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:53:25.599845 env[1359]: time="2025-08-13T00:53:25.599322393Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:53:25.599845 env[1359]: time="2025-08-13T00:53:25.599420978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.602927328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.602957766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.603196680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.603208193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.603216827Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.603222321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.603273642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.603424898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.603514695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:25.603671 env[1359]: time="2025-08-13T00:53:25.603524348Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:53:25.603904 env[1359]: time="2025-08-13T00:53:25.603552472Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:53:25.603904 env[1359]: time="2025-08-13T00:53:25.603569426Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:53:25.608020 env[1359]: time="2025-08-13T00:53:25.607911250Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:53:25.608020 env[1359]: time="2025-08-13T00:53:25.607935564Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:53:25.608020 env[1359]: time="2025-08-13T00:53:25.607944296Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:53:25.608020 env[1359]: time="2025-08-13T00:53:25.607968393Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:53:25.608020 env[1359]: time="2025-08-13T00:53:25.607979866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:53:25.608020 env[1359]: time="2025-08-13T00:53:25.607991216Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608186213Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608201902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608210007Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608217652Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608225785Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608239092Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608309792Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608365553Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608550729Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608594044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608606681Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608632552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608640479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609430 env[1359]: time="2025-08-13T00:53:25.608647956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608654139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608660820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608669220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608676836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608683079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608690922Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608759787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608769217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608775952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608782152Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608791260Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608798359Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608809562Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:53:25.609679 env[1359]: time="2025-08-13T00:53:25.608832285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.608968234Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.609007584Z" level=info msg="Connect containerd service" Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.609033819Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.609337601Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.609713534Z" level=info msg="Start subscribing containerd event" Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.609756768Z" level=info msg="Start recovering state" Aug 13 00:53:25.609910 env[1359]: 
time="2025-08-13T00:53:25.609801348Z" level=info msg="Start event monitor" Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.609814278Z" level=info msg="Start snapshots syncer" Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.609819395Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:53:25.609910 env[1359]: time="2025-08-13T00:53:25.609823591Z" level=info msg="Start streaming server" Aug 13 00:53:25.612210 env[1359]: time="2025-08-13T00:53:25.610047807Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:53:25.612210 env[1359]: time="2025-08-13T00:53:25.610086750Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:53:25.612210 env[1359]: time="2025-08-13T00:53:25.610127812Z" level=info msg="containerd successfully booted in 0.066795s" Aug 13 00:53:25.610203 systemd[1]: Started containerd.service. Aug 13 00:53:25.994146 tar[1356]: linux-amd64/LICENSE Aug 13 00:53:25.995296 tar[1356]: linux-amd64/README.md Aug 13 00:53:25.999702 systemd[1]: Finished prepare-helm.service. Aug 13 00:53:26.025278 locksmithd[1407]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:53:26.422024 sshd_keygen[1364]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:53:26.437988 systemd[1]: Finished sshd-keygen.service. Aug 13 00:53:26.439269 systemd[1]: Starting issuegen.service... Aug 13 00:53:26.443281 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:53:26.443409 systemd[1]: Finished issuegen.service. Aug 13 00:53:26.444623 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:53:26.466608 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:53:26.467848 systemd[1]: Started getty@tty1.service. Aug 13 00:53:26.468998 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:53:26.469260 systemd[1]: Reached target getty.target. Aug 13 00:53:27.404265 systemd[1]: Started kubelet.service. 
Aug 13 00:53:27.404606 systemd[1]: Reached target multi-user.target. Aug 13 00:53:27.405655 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:53:27.410613 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:53:27.410742 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:53:27.413116 systemd[1]: Startup finished in 6.038s (kernel) + 7.221s (userspace) = 13.259s. Aug 13 00:53:27.438276 login[1482]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:53:27.438554 login[1481]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:53:27.479796 systemd[1]: Created slice user-500.slice. Aug 13 00:53:27.480659 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:53:27.484415 systemd-logind[1347]: New session 1 of user core. Aug 13 00:53:27.488113 systemd-logind[1347]: New session 2 of user core. Aug 13 00:53:27.492169 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:53:27.493164 systemd[1]: Starting user@500.service... Aug 13 00:53:27.497049 (systemd)[1492]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:27.547907 systemd[1492]: Queued start job for default target default.target. Aug 13 00:53:27.548085 systemd[1492]: Reached target paths.target. Aug 13 00:53:27.548102 systemd[1492]: Reached target sockets.target. Aug 13 00:53:27.548115 systemd[1492]: Reached target timers.target. Aug 13 00:53:27.548141 systemd[1492]: Reached target basic.target. Aug 13 00:53:27.548173 systemd[1492]: Reached target default.target. Aug 13 00:53:27.548194 systemd[1492]: Startup finished in 46ms. Aug 13 00:53:27.548307 systemd[1]: Started user@500.service. Aug 13 00:53:27.549042 systemd[1]: Started session-1.scope. Aug 13 00:53:27.549532 systemd[1]: Started session-2.scope. 
Aug 13 00:53:28.014016 kubelet[1488]: E0813 00:53:28.013970 1488 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:28.015350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:28.015455 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:38.266189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:53:38.266343 systemd[1]: Stopped kubelet.service. Aug 13 00:53:38.267701 systemd[1]: Starting kubelet.service... Aug 13 00:53:38.709345 systemd[1]: Started kubelet.service. Aug 13 00:53:38.745212 kubelet[1530]: E0813 00:53:38.745177 1530 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:38.747760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:38.747887 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:48.998636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:53:48.998794 systemd[1]: Stopped kubelet.service. Aug 13 00:53:49.000146 systemd[1]: Starting kubelet.service... Aug 13 00:53:49.336745 systemd[1]: Started kubelet.service. 
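The ten-second gap between each kubelet failure and the following "Scheduled restart job" entry, together with the incrementing restart counter, matches systemd's automatic-restart behaviour. A unit configured roughly like this would produce exactly that pattern (a sketch only; the actual kubelet.service on this host is not shown in the log):

```ini
# Hypothetical kubelet.service excerpt consistent with the observed
# ~10 s restart cadence and the "restart counter is at N" entries.
[Service]
Restart=always
RestartSec=10
```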
Aug 13 00:53:49.398006 kubelet[1545]: E0813 00:53:49.397965 1545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:49.399346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:49.399477 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:55.611239 systemd[1]: Created slice system-sshd.slice. Aug 13 00:53:55.611985 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.68.195:42776.service. Aug 13 00:53:55.654631 sshd[1552]: Accepted publickey for core from 139.178.68.195 port 42776 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:53:55.655430 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:55.658014 systemd-logind[1347]: New session 3 of user core. Aug 13 00:53:55.658351 systemd[1]: Started session-3.scope. Aug 13 00:53:55.705186 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.68.195:42782.service. Aug 13 00:53:55.746709 sshd[1557]: Accepted publickey for core from 139.178.68.195 port 42782 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:53:55.747782 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:55.750713 systemd-logind[1347]: New session 4 of user core. Aug 13 00:53:55.751026 systemd[1]: Started session-4.scope. Aug 13 00:53:55.802073 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.68.195:42792.service. Aug 13 00:53:55.802246 sshd[1557]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:55.803907 systemd[1]: sshd@1-139.178.70.105:22-139.178.68.195:42782.service: Deactivated successfully. 
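Each kubelet start above fails for the same reason: `/var/lib/kubelet/config.yaml` does not exist yet. On a kubeadm-managed node that file is normally written by `kubeadm init` or `kubeadm join`, so this loop is expected until the node is joined. For reference, a minimal file of the expected kind looks like this (illustrative only; the field values are assumptions, not taken from this host):

```yaml
# /var/lib/kubelet/config.yaml — minimal illustrative KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
```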
Aug 13 00:53:55.804800 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:53:55.805028 systemd-logind[1347]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:53:55.805490 systemd-logind[1347]: Removed session 4. Aug 13 00:53:55.835919 sshd[1562]: Accepted publickey for core from 139.178.68.195 port 42792 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:53:55.836896 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:55.840398 systemd[1]: Started session-5.scope. Aug 13 00:53:55.841163 systemd-logind[1347]: New session 5 of user core. Aug 13 00:53:55.888495 sshd[1562]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:55.890129 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.68.195:42802.service. Aug 13 00:53:55.894160 systemd[1]: sshd@2-139.178.70.105:22-139.178.68.195:42792.service: Deactivated successfully. Aug 13 00:53:55.894603 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:53:55.895311 systemd-logind[1347]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:53:55.895895 systemd-logind[1347]: Removed session 5. Aug 13 00:53:55.922733 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 42802 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:53:55.923584 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:55.926309 systemd[1]: Started session-6.scope. Aug 13 00:53:55.926578 systemd-logind[1347]: New session 6 of user core. Aug 13 00:53:55.977659 sshd[1569]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:55.979344 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.68.195:42812.service. Aug 13 00:53:55.981007 systemd[1]: sshd@3-139.178.70.105:22-139.178.68.195:42802.service: Deactivated successfully. Aug 13 00:53:55.981741 systemd[1]: session-6.scope: Deactivated successfully. 
Aug 13 00:53:55.981986 systemd-logind[1347]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:53:55.982490 systemd-logind[1347]: Removed session 6. Aug 13 00:53:56.019070 sshd[1576]: Accepted publickey for core from 139.178.68.195 port 42812 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:53:56.019819 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:56.022614 systemd[1]: Started session-7.scope. Aug 13 00:53:56.023205 systemd-logind[1347]: New session 7 of user core. Aug 13 00:53:56.087605 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:53:56.087751 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:56.093720 dbus-daemon[1334]: \xd0]n҅U: received setenforce notice (enforcing=-482366160) Aug 13 00:53:56.094722 sudo[1582]: pam_unix(sudo:session): session closed for user root Aug 13 00:53:56.096637 sshd[1576]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:56.098266 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.68.195:42814.service. Aug 13 00:53:56.101854 systemd[1]: sshd@4-139.178.70.105:22-139.178.68.195:42812.service: Deactivated successfully. Aug 13 00:53:56.103180 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:53:56.103645 systemd-logind[1347]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:53:56.104535 systemd-logind[1347]: Removed session 7. Aug 13 00:53:56.133392 sshd[1584]: Accepted publickey for core from 139.178.68.195 port 42814 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:53:56.134500 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:56.137337 systemd[1]: Started session-8.scope. Aug 13 00:53:56.138113 systemd-logind[1347]: New session 8 of user core. 
Aug 13 00:53:56.188419 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:53:56.188566 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:56.191025 sudo[1591]: pam_unix(sudo:session): session closed for user root Aug 13 00:53:56.194680 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:53:56.194849 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:56.201423 systemd[1]: Stopping audit-rules.service... Aug 13 00:53:56.202000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 00:53:56.205261 kernel: kauditd_printk_skb: 27 callbacks suppressed Aug 13 00:53:56.205308 kernel: audit: type=1305 audit(1755046436.202:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 00:53:56.205334 kernel: audit: type=1300 audit(1755046436.202:156): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf6d699c0 a2=420 a3=0 items=0 ppid=1 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.202000 audit[1594]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf6d699c0 a2=420 a3=0 items=0 ppid=1 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.205506 auditctl[1594]: No rules Aug 13 00:53:56.205810 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:53:56.205958 systemd[1]: Stopped audit-rules.service. 
Aug 13 00:53:56.207296 systemd[1]: Starting audit-rules.service... Aug 13 00:53:56.210393 kernel: audit: type=1327 audit(1755046436.202:156): proctitle=2F7362696E2F617564697463746C002D44 Aug 13 00:53:56.202000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Aug 13 00:53:56.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.213573 kernel: audit: type=1131 audit(1755046436.205:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.222908 augenrules[1612]: No rules Aug 13 00:53:56.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.223490 systemd[1]: Finished audit-rules.service. Aug 13 00:53:56.224333 sudo[1590]: pam_unix(sudo:session): session closed for user root Aug 13 00:53:56.228586 kernel: audit: type=1130 audit(1755046436.223:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.230397 sshd[1584]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:56.230899 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.68.195:42824.service. Aug 13 00:53:56.232078 systemd[1]: sshd@5-139.178.70.105:22-139.178.68.195:42814.service: Deactivated successfully. 
Aug 13 00:53:56.223000 audit[1590]: USER_END pid=1590 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.235765 kernel: audit: type=1106 audit(1755046436.223:159): pid=1590 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.235489 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:53:56.235517 systemd-logind[1347]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:53:56.237690 systemd-logind[1347]: Removed session 8. Aug 13 00:53:56.223000 audit[1590]: CRED_DISP pid=1590 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:42824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.244847 kernel: audit: type=1104 audit(1755046436.223:160): pid=1590 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.244897 kernel: audit: type=1130 audit(1755046436.230:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:42824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:56.244917 kernel: audit: type=1106 audit(1755046436.230:162): pid=1584 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:56.230000 audit[1584]: USER_END pid=1584 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:56.230000 audit[1584]: CRED_DISP pid=1584 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:56.251489 kernel: audit: type=1104 audit(1755046436.230:163): pid=1584 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:56.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.105:22-139.178.68.195:42814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:56.270000 audit[1617]: USER_ACCT pid=1617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:56.271184 sshd[1617]: Accepted publickey for core from 139.178.68.195 port 42824 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:53:56.271000 audit[1617]: CRED_ACQ pid=1617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:56.271000 audit[1617]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff33ef3570 a2=3 a3=0 items=0 ppid=1 pid=1617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.271000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:56.272418 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:56.275146 systemd-logind[1347]: New session 9 of user core. Aug 13 00:53:56.275556 systemd[1]: Started session-9.scope. 
Aug 13 00:53:56.277000 audit[1617]: USER_START pid=1617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:56.278000 audit[1622]: CRED_ACQ pid=1622 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:56.323000 audit[1623]: USER_ACCT pid=1623 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.323000 audit[1623]: CRED_REFR pid=1623 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.324206 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:53:56.324338 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:56.324000 audit[1623]: USER_START pid=1623 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:53:56.341831 systemd[1]: Starting docker.service... 
Aug 13 00:53:56.366303 env[1633]: time="2025-08-13T00:53:56.366273703Z" level=info msg="Starting up" Aug 13 00:53:56.367375 env[1633]: time="2025-08-13T00:53:56.367345332Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:53:56.367375 env[1633]: time="2025-08-13T00:53:56.367372696Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:53:56.367436 env[1633]: time="2025-08-13T00:53:56.367392422Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:53:56.367436 env[1633]: time="2025-08-13T00:53:56.367400311Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:53:56.368973 env[1633]: time="2025-08-13T00:53:56.368484524Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:53:56.368973 env[1633]: time="2025-08-13T00:53:56.368498750Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:53:56.368973 env[1633]: time="2025-08-13T00:53:56.368519131Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:53:56.368973 env[1633]: time="2025-08-13T00:53:56.368528766Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:53:56.388406 env[1633]: time="2025-08-13T00:53:56.388388096Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 00:53:56.388526 env[1633]: time="2025-08-13T00:53:56.388516491Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 00:53:56.388694 env[1633]: time="2025-08-13T00:53:56.388685341Z" level=info msg="Loading containers: start." 
Aug 13 00:53:56.436000 audit[1664]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1664 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.436000 audit[1664]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe2a0c6d30 a2=0 a3=7ffe2a0c6d1c items=0 ppid=1633 pid=1664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.436000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 13 00:53:56.437000 audit[1666]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.437000 audit[1666]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe6c7b0e50 a2=0 a3=7ffe6c7b0e3c items=0 ppid=1633 pid=1666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 13 00:53:56.439000 audit[1668]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.439000 audit[1668]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff1b570af0 a2=0 a3=7fff1b570adc items=0 ppid=1633 pid=1668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.439000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 00:53:56.442000 audit[1670]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.442000 audit[1670]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe2e410cd0 a2=0 a3=7ffe2e410cbc items=0 ppid=1633 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.442000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 00:53:56.443000 audit[1672]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.443000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe0cda5340 a2=0 a3=7ffe0cda532c items=0 ppid=1633 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.443000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 13 00:53:56.457000 audit[1677]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.457000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe39c539e0 a2=0 a3=7ffe39c539cc items=0 ppid=1633 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.457000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 13 00:53:56.461000 audit[1679]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.461000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd859d7e50 a2=0 a3=7ffd859d7e3c items=0 ppid=1633 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.461000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 00:53:56.463000 audit[1681]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.463000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc6fa0b180 a2=0 a3=7ffc6fa0b16c items=0 ppid=1633 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.463000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 00:53:56.464000 audit[1683]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.464000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe0ee78af0 a2=0 a3=7ffe0ee78adc items=0 ppid=1633 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.464000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:53:56.469000 audit[1687]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.469000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffe20f0bd0 a2=0 a3=7fffe20f0bbc items=0 ppid=1633 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.469000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:53:56.473000 audit[1688]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.473000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffd059da40 a2=0 a3=7fffd059da2c items=0 ppid=1633 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.473000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 00:53:56.493582 kernel: Initializing XFRM netlink socket Aug 13 00:53:56.584996 env[1633]: time="2025-08-13T00:53:56.584969786Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Aug 13 00:53:56.623000 audit[1696]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.623000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff0ea213b0 a2=0 a3=7fff0ea2139c items=0 ppid=1633 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.623000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 13 00:53:56.633000 audit[1699]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.633000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe6b65e9d0 a2=0 a3=7ffe6b65e9bc items=0 ppid=1633 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:56.633000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 13 00:53:56.635000 audit[1702]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:53:56.635000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc1620c810 a2=0 a3=7ffc1620c7fc items=0 ppid=1633 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.635000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Aug 13 00:53:56.637000 audit[1704]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1704 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.637000 audit[1704]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe7b590440 a2=0 a3=7ffe7b59042c items=0 ppid=1633 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.637000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Aug 13 00:53:56.639000 audit[1706]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1706 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.639000 audit[1706]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff66800f30 a2=0 a3=7fff66800f1c items=0 ppid=1633 pid=1706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.639000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Aug 13 00:53:56.640000 audit[1708]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.640000 audit[1708]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc9f69aef0 a2=0 a3=7ffc9f69aedc items=0 ppid=1633 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.640000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Aug 13 00:53:56.642000 audit[1710]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1710 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.642000 audit[1710]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffccee20b0 a2=0 a3=7fffccee209c items=0 ppid=1633 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.642000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Aug 13 00:53:56.650000 audit[1713]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1713 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.650000 audit[1713]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fffa3f7e620 a2=0 a3=7fffa3f7e60c items=0 ppid=1633 pid=1713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.650000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Aug 13 00:53:56.651000 audit[1715]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1715 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.651000 audit[1715]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffeaa822cc0 a2=0 a3=7ffeaa822cac items=0 ppid=1633 pid=1715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.651000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Aug 13 00:53:56.653000 audit[1717]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1717 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.653000 audit[1717]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe44ec4400 a2=0 a3=7ffe44ec43ec items=0 ppid=1633 pid=1717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.653000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Aug 13 00:53:56.655000 audit[1719]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1719 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.655000 audit[1719]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcd51e9810 a2=0 a3=7ffcd51e97fc items=0 ppid=1633 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.655000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Aug 13 00:53:56.656386 systemd-networkd[1119]: docker0: Link UP
Aug 13 00:53:56.660000 audit[1723]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1723 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.660000 audit[1723]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd69da7a30 a2=0 a3=7ffd69da7a1c items=0 ppid=1633 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.660000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Aug 13 00:53:56.665000 audit[1724]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1724 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:53:56.665000 audit[1724]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd95629f00 a2=0 a3=7ffd95629eec items=0 ppid=1633 pid=1724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:56.665000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Aug 13 00:53:56.667057 env[1633]: time="2025-08-13T00:53:56.667033166Z" level=info msg="Loading containers: done."
Aug 13 00:53:56.724640 env[1633]: time="2025-08-13T00:53:56.723803699Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:53:56.724640 env[1633]: time="2025-08-13T00:53:56.723953984Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 00:53:56.724640 env[1633]: time="2025-08-13T00:53:56.724141846Z" level=info msg="Daemon has completed initialization"
Aug 13 00:53:56.775663 systemd[1]: Started docker.service.
Aug 13 00:53:56.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:56.782444 env[1633]: time="2025-08-13T00:53:56.782402250Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:53:58.462074 env[1359]: time="2025-08-13T00:53:58.462043472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 00:53:59.506591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 00:53:59.506750 systemd[1]: Stopped kubelet.service.
Aug 13 00:53:59.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:59.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:59.508059 systemd[1]: Starting kubelet.service...
Aug 13 00:53:59.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:59.620000 systemd[1]: Started kubelet.service.
Aug 13 00:53:59.743833 kubelet[1765]: E0813 00:53:59.743803 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:53:59.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:53:59.745616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:53:59.745729 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:53:59.929940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711036915.mount: Deactivated successfully.
Aug 13 00:54:02.216253 env[1359]: time="2025-08-13T00:54:02.216192068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:02.247951 env[1359]: time="2025-08-13T00:54:02.247922141Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:02.261572 env[1359]: time="2025-08-13T00:54:02.261543823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:02.279160 env[1359]: time="2025-08-13T00:54:02.279133162Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:02.279629 env[1359]: time="2025-08-13T00:54:02.279608308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 00:54:02.280433 env[1359]: time="2025-08-13T00:54:02.280414273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 00:54:04.254632 env[1359]: time="2025-08-13T00:54:04.254584606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:04.283898 env[1359]: time="2025-08-13T00:54:04.283873974Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:04.300782 env[1359]: time="2025-08-13T00:54:04.300756715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:04.322680 env[1359]: time="2025-08-13T00:54:04.322639785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:04.323052 env[1359]: time="2025-08-13T00:54:04.323019329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 00:54:04.323737 env[1359]: time="2025-08-13T00:54:04.323719763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 00:54:06.583433 env[1359]: time="2025-08-13T00:54:06.583399010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:06.606479 env[1359]: time="2025-08-13T00:54:06.606451856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:06.626154 env[1359]: time="2025-08-13T00:54:06.626126222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:06.642608 env[1359]: time="2025-08-13T00:54:06.642580546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:06.643312 env[1359]: time="2025-08-13T00:54:06.643296312Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 00:54:06.644155 env[1359]: time="2025-08-13T00:54:06.644134079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 00:54:09.227371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851346356.mount: Deactivated successfully.
Aug 13 00:54:09.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:09.756575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 13 00:54:09.756697 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:09.757876 systemd[1]: Starting kubelet.service...
Aug 13 00:54:09.766239 kernel: kauditd_printk_skb: 88 callbacks suppressed
Aug 13 00:54:09.766293 kernel: audit: type=1130 audit(1755046449.756:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:09.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:09.773571 kernel: audit: type=1131 audit(1755046449.756:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:10.755292 env[1359]: time="2025-08-13T00:54:10.755247859Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:10.767810 env[1359]: time="2025-08-13T00:54:10.767779962Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:10.778100 env[1359]: time="2025-08-13T00:54:10.777907931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:10.779683 systemd[1]: Started kubelet.service.
Aug 13 00:54:10.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:10.784575 kernel: audit: type=1130 audit(1755046450.779:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:10.787208 env[1359]: time="2025-08-13T00:54:10.786710975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:10.787208 env[1359]: time="2025-08-13T00:54:10.786874183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 00:54:10.787739 env[1359]: time="2025-08-13T00:54:10.787566749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:54:10.836846 kubelet[1781]: E0813 00:54:10.836818 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:54:10.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:54:10.837906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:54:10.838003 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:54:10.841576 kernel: audit: type=1131 audit(1755046450.837:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:54:11.168869 update_engine[1349]: I0813 00:54:11.168613 1349 update_attempter.cc:509] Updating boot flags...
Aug 13 00:54:12.847099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394835662.mount: Deactivated successfully.
Aug 13 00:54:14.605264 env[1359]: time="2025-08-13T00:54:14.605204574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:14.638952 env[1359]: time="2025-08-13T00:54:14.638926553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:14.656018 env[1359]: time="2025-08-13T00:54:14.655981550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:14.666775 env[1359]: time="2025-08-13T00:54:14.666750480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:14.667500 env[1359]: time="2025-08-13T00:54:14.667352270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:54:14.668068 env[1359]: time="2025-08-13T00:54:14.668047705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:54:15.329060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207378431.mount: Deactivated successfully.
Aug 13 00:54:15.388851 env[1359]: time="2025-08-13T00:54:15.388739815Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:15.401324 env[1359]: time="2025-08-13T00:54:15.401289489Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:15.402346 env[1359]: time="2025-08-13T00:54:15.402323732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:15.403386 env[1359]: time="2025-08-13T00:54:15.403361175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:15.403989 env[1359]: time="2025-08-13T00:54:15.403965758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:54:15.404376 env[1359]: time="2025-08-13T00:54:15.404344727Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 00:54:16.410665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527796555.mount: Deactivated successfully.
Aug 13 00:54:20.203314 env[1359]: time="2025-08-13T00:54:20.203286918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.233006 env[1359]: time="2025-08-13T00:54:20.232986193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.242761 env[1359]: time="2025-08-13T00:54:20.242738209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.251610 env[1359]: time="2025-08-13T00:54:20.251586208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.252461 env[1359]: time="2025-08-13T00:54:20.252446432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 00:54:20.870767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Aug 13 00:54:20.870917 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:20.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:20.872275 systemd[1]: Starting kubelet.service...
Aug 13 00:54:20.876523 kernel: audit: type=1130 audit(1755046460.869:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:20.876579 kernel: audit: type=1131 audit(1755046460.869:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:20.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:22.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:54:22.632914 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:54:22.632989 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:54:22.633207 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:22.638215 systemd[1]: Starting kubelet.service...
Aug 13 00:54:22.638570 kernel: audit: type=1130 audit(1755046462.631:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:54:22.656774 systemd[1]: Reloading.
Aug 13 00:54:22.700709 /usr/lib/systemd/system-generators/torcx-generator[1854]: time="2025-08-13T00:54:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:54:22.700728 /usr/lib/systemd/system-generators/torcx-generator[1854]: time="2025-08-13T00:54:22Z" level=info msg="torcx already run"
Aug 13 00:54:22.765635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:54:22.765747 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:54:22.777849 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:54:22.943704 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:54:22.943887 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:54:22.944337 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:22.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:54:22.946643 systemd[1]: Starting kubelet.service...
Aug 13 00:54:22.948573 kernel: audit: type=1130 audit(1755046462.942:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:54:24.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:24.300555 systemd[1]: Started kubelet.service.
Aug 13 00:54:24.306233 kernel: audit: type=1130 audit(1755046464.299:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:24.380224 kubelet[1928]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:54:24.380473 kubelet[1928]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:54:24.380513 kubelet[1928]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:54:24.380854 kubelet[1928]: I0813 00:54:24.380601 1928 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:54:24.663009 kubelet[1928]: I0813 00:54:24.662933 1928 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 00:54:24.663129 kubelet[1928]: I0813 00:54:24.663119 1928 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:54:24.663342 kubelet[1928]: I0813 00:54:24.663333 1928 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 00:54:24.689372 kubelet[1928]: E0813 00:54:24.689347 1928 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:54:24.691212 kubelet[1928]: I0813 00:54:24.691199 1928 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:54:24.702489 kubelet[1928]: E0813 00:54:24.702464 1928 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:54:24.702489 kubelet[1928]: I0813 00:54:24.702494 1928 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:54:24.705875 kubelet[1928]: I0813 00:54:24.705855 1928 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:54:24.706755 kubelet[1928]: I0813 00:54:24.706739 1928 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 00:54:24.706840 kubelet[1928]: I0813 00:54:24.706819 1928 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:54:24.706968 kubelet[1928]: I0813 00:54:24.706838 1928 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Aug 13 00:54:24.707068 kubelet[1928]: I0813 00:54:24.706973 1928 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:54:24.707068 kubelet[1928]: I0813 00:54:24.706980 1928 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 00:54:24.707068 kubelet[1928]: I0813 00:54:24.707062 1928 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:54:24.716020 kubelet[1928]: I0813 00:54:24.715979 1928 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 00:54:24.716020 kubelet[1928]: I0813 00:54:24.716020 1928 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:54:24.716110 kubelet[1928]: I0813 00:54:24.716042 1928 kubelet.go:314] "Adding apiserver pod source"
Aug 13 00:54:24.716110 kubelet[1928]: I0813 00:54:24.716060 1928 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:54:24.725280 kubelet[1928]: W0813 00:54:24.725252 1928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused
Aug 13 00:54:24.725389 kubelet[1928]: E0813 00:54:24.725375 1928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:54:24.725475 kubelet[1928]: I0813 00:54:24.725466 1928 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 00:54:24.725766 kubelet[1928]: I0813 00:54:24.725758 1928 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:54:24.730174 kubelet[1928]: W0813 00:54:24.730164 1928 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:54:24.733484 kubelet[1928]: W0813 00:54:24.733445 1928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused
Aug 13 00:54:24.733525 kubelet[1928]: E0813 00:54:24.733488 1928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:54:24.734446 kubelet[1928]: I0813 00:54:24.734438 1928 server.go:1274] "Started kubelet"
Aug 13 00:54:24.739319 kernel: audit: type=1400 audit(1755046464.733:211): avc: denied { mac_admin } for pid=1928 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:54:24.740341 kernel: audit: type=1401 audit(1755046464.733:211): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 13 00:54:24.740366 kernel: audit: type=1300 audit(1755046464.733:211): arch=c000003e syscall=188 success=no exit=-22 a0=c0008a2d50 a1=c000695920 a2=c0008a2d20 a3=25 items=0 ppid=1 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:24.733000 audit[1928]: AVC avc: denied { mac_admin } for pid=1928 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:54:24.733000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 13 00:54:24.733000 audit[1928]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008a2d50 a1=c000695920 a2=c0008a2d20 a3=25 items=0 ppid=1 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:24.740519 kubelet[1928]: I0813 00:54:24.739353 1928 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Aug 13 00:54:24.740519 kubelet[1928]: I0813 00:54:24.739379 1928 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Aug 13 00:54:24.740519 kubelet[1928]: I0813 00:54:24.739437 1928 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:54:24.743584 kernel: audit: type=1327 audit(1755046464.733:211): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Aug 13 00:54:24.733000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Aug 13 00:54:24.747187 kubelet[1928]: I0813 00:54:24.747162 1928 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:54:24.738000 audit[1928]: AVC avc: denied { mac_admin } for pid=1928 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:54:24.748525 kubelet[1928]: I0813 00:54:24.748515 1928 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 00:54:24.750015 kernel: audit: type=1400 audit(1755046464.738:212): avc: denied { mac_admin } for pid=1928 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:54:24.738000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 13 00:54:24.738000 audit[1928]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000978f20 a1=c000695938 a2=c0008a2de0 a3=25 items=0 ppid=1 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:24.738000 audit: PROCTITLE
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:54:24.742000 audit[1940]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:24.742000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdb1685760 a2=0 a3=7ffdb168574c items=0 ppid=1928 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:54:24.742000 audit[1941]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:24.742000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebfcf0f20 a2=0 a3=7ffebfcf0f0c items=0 ppid=1928 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:54:24.753844 kubelet[1928]: E0813 00:54:24.752855 1928 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d7606ccbdb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:24.734412212 +0000 UTC m=+0.427384347,LastTimestamp:2025-08-13 00:54:24.734412212 +0000 UTC m=+0.427384347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:54:24.755027 kubelet[1928]: I0813 00:54:24.754705 1928 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:24.755027 kubelet[1928]: I0813 00:54:24.754825 1928 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:24.755027 kubelet[1928]: I0813 00:54:24.754930 1928 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:24.755193 kubelet[1928]: I0813 00:54:24.755186 1928 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:54:24.755350 kubelet[1928]: E0813 00:54:24.755339 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:24.757149 kubelet[1928]: E0813 00:54:24.756840 1928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Aug 13 00:54:24.757149 kubelet[1928]: E0813 00:54:24.756922 1928 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:54:24.757149 kubelet[1928]: I0813 00:54:24.756966 1928 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:24.757149 kubelet[1928]: I0813 00:54:24.756987 1928 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:54:24.757263 kubelet[1928]: W0813 00:54:24.757153 1928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 00:54:24.757263 kubelet[1928]: E0813 00:54:24.757180 1928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:24.757311 kubelet[1928]: I0813 00:54:24.757305 1928 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:54:24.757357 kubelet[1928]: I0813 00:54:24.757341 1928 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:54:24.754000 audit[1943]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:24.754000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf6d73a90 a2=0 a3=7ffcf6d73a7c items=0 ppid=1928 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.754000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:54:24.757898 kubelet[1928]: I0813 00:54:24.757836 1928 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:54:24.756000 audit[1945]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:24.756000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc2c34e560 a2=0 a3=7ffc2c34e54c items=0 ppid=1928 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:54:24.764000 audit[1948]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:24.764000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffe30c5b80 a2=0 a3=7fffe30c5b6c items=0 ppid=1928 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Aug 13 00:54:24.770066 kubelet[1928]: I0813 00:54:24.770048 1928 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 00:54:24.769000 audit[1951]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:24.769000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7471da90 a2=0 a3=7ffc7471da7c items=0 ppid=1928 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 00:54:24.771305 kubelet[1928]: I0813 00:54:24.771296 1928 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:54:24.771354 kubelet[1928]: I0813 00:54:24.771347 1928 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:54:24.771406 kubelet[1928]: I0813 00:54:24.771399 1928 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:54:24.771471 kubelet[1928]: E0813 00:54:24.771461 1928 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:24.770000 audit[1953]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:24.770000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6d941510 a2=0 a3=7ffe6d9414fc items=0 ppid=1928 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.770000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:54:24.771000 audit[1954]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:24.771000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccb88daf0 a2=0 a3=7ffccb88dadc items=0 ppid=1928 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:54:24.771000 audit[1955]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:24.771000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef1bfe540 a2=0 a3=7ffef1bfe52c items=0 ppid=1928 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:54:24.772000 audit[1956]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:24.772000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca81367b0 a2=0 a3=7ffca813679c items=0 ppid=1928 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 00:54:24.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:54:24.773000 audit[1957]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:24.773000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff6f99bda0 a2=0 a3=7fff6f99bd8c items=0 ppid=1928 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:54:24.773000 audit[1958]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:24.773000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffed4e8d970 a2=0 a3=7ffed4e8d95c items=0 ppid=1928 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:54:24.775693 kubelet[1928]: W0813 00:54:24.775670 1928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 00:54:24.775762 kubelet[1928]: E0813 00:54:24.775750 1928 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:24.780961 kubelet[1928]: I0813 00:54:24.780949 1928 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:54:24.781035 kubelet[1928]: I0813 00:54:24.781026 1928 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:24.781091 kubelet[1928]: I0813 00:54:24.781084 1928 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:24.782003 kubelet[1928]: I0813 00:54:24.781995 1928 policy_none.go:49] "None policy: Start" Aug 13 00:54:24.782277 kubelet[1928]: I0813 00:54:24.782270 1928 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:54:24.782325 kubelet[1928]: I0813 00:54:24.782318 1928 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:54:24.787892 kubelet[1928]: I0813 00:54:24.787866 1928 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:54:24.786000 audit[1928]: AVC avc: denied { mac_admin } for pid=1928 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:24.786000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:54:24.786000 audit[1928]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000710360 a1=c000d74f48 a2=c000710330 a3=25 items=0 ppid=1 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:24.786000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:54:24.788124 kubelet[1928]: I0813 00:54:24.787934 1928 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:54:24.788124 kubelet[1928]: I0813 00:54:24.788017 1928 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:24.788124 kubelet[1928]: I0813 00:54:24.788028 1928 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:24.788625 kubelet[1928]: I0813 00:54:24.788615 1928 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:54:24.789161 kubelet[1928]: E0813 00:54:24.789150 1928 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:54:24.889591 kubelet[1928]: I0813 00:54:24.889549 1928 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:24.889915 kubelet[1928]: E0813 00:54:24.889902 1928 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Aug 13 00:54:24.960247 kubelet[1928]: E0813 00:54:24.957767 1928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Aug 13 00:54:25.058127 kubelet[1928]: I0813 00:54:25.058079 1928 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:25.058245 kubelet[1928]: I0813 00:54:25.058136 1928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27d765c2dc9bd487f9a56c068328faaf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27d765c2dc9bd487f9a56c068328faaf\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:25.058245 kubelet[1928]: I0813 00:54:25.058156 1928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:25.058245 kubelet[1928]: I0813 00:54:25.058168 1928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:25.058245 kubelet[1928]: I0813 00:54:25.058180 1928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:25.058245 kubelet[1928]: I0813 00:54:25.058191 1928 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:25.058369 kubelet[1928]: I0813 00:54:25.058202 1928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:25.058369 kubelet[1928]: I0813 00:54:25.058213 1928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27d765c2dc9bd487f9a56c068328faaf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27d765c2dc9bd487f9a56c068328faaf\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:25.058369 kubelet[1928]: I0813 00:54:25.058225 1928 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27d765c2dc9bd487f9a56c068328faaf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27d765c2dc9bd487f9a56c068328faaf\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:25.091313 kubelet[1928]: I0813 00:54:25.091296 1928 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:25.091665 kubelet[1928]: E0813 00:54:25.091639 1928 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Aug 13 00:54:25.178172 env[1359]: 
time="2025-08-13T00:54:25.178132650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27d765c2dc9bd487f9a56c068328faaf,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:25.178478 env[1359]: time="2025-08-13T00:54:25.178458838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:25.178904 env[1359]: time="2025-08-13T00:54:25.178834021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:25.358222 kubelet[1928]: E0813 00:54:25.358190 1928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Aug 13 00:54:25.492870 kubelet[1928]: I0813 00:54:25.492849 1928 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:25.493239 kubelet[1928]: E0813 00:54:25.493044 1928 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Aug 13 00:54:25.672841 kubelet[1928]: W0813 00:54:25.672572 1928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 00:54:25.672841 kubelet[1928]: E0813 00:54:25.672616 1928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:25.757474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123315479.mount: Deactivated successfully. Aug 13 00:54:25.760207 kubelet[1928]: W0813 00:54:25.760164 1928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 00:54:25.760207 kubelet[1928]: E0813 00:54:25.760191 1928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:25.760361 env[1359]: time="2025-08-13T00:54:25.760343561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.761082 env[1359]: time="2025-08-13T00:54:25.761061711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.761694 env[1359]: time="2025-08-13T00:54:25.761681788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.762853 env[1359]: time="2025-08-13T00:54:25.762839665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Aug 13 00:54:25.763698 env[1359]: time="2025-08-13T00:54:25.763682377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.765299 env[1359]: time="2025-08-13T00:54:25.765281757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.767187 env[1359]: time="2025-08-13T00:54:25.767173891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.767992 env[1359]: time="2025-08-13T00:54:25.767970128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.770256 env[1359]: time="2025-08-13T00:54:25.770241371Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.770706 env[1359]: time="2025-08-13T00:54:25.770693353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.771105 env[1359]: time="2025-08-13T00:54:25.771094826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.772775 env[1359]: time="2025-08-13T00:54:25.772752137Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:25.799063 env[1359]: time="2025-08-13T00:54:25.791035447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:25.799063 env[1359]: time="2025-08-13T00:54:25.791062188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:25.799063 env[1359]: time="2025-08-13T00:54:25.791072107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:25.799063 env[1359]: time="2025-08-13T00:54:25.791162332Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85b06413e6eb8294d1f5a223a156ed382367f560d00c25fff3dfc8a8ef8109bb pid=1986 runtime=io.containerd.runc.v2 Aug 13 00:54:25.799295 env[1359]: time="2025-08-13T00:54:25.783788240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:25.799295 env[1359]: time="2025-08-13T00:54:25.783811368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:25.799295 env[1359]: time="2025-08-13T00:54:25.783818259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:25.799295 env[1359]: time="2025-08-13T00:54:25.783882373Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e5e7258f63e452ae11b825b068e1ea109adc0862dd48f4ad3f05797791756f4 pid=1971 runtime=io.containerd.runc.v2 Aug 13 00:54:25.826489 env[1359]: time="2025-08-13T00:54:25.822136066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:25.826489 env[1359]: time="2025-08-13T00:54:25.822166167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:25.826489 env[1359]: time="2025-08-13T00:54:25.822173215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:25.826489 env[1359]: time="2025-08-13T00:54:25.822258679Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b2fd34df4f0849b45e776e8c4317cd8aa6e9f7cc00968a31e1e9f28a055e82b pid=2010 runtime=io.containerd.runc.v2 Aug 13 00:54:25.878588 env[1359]: time="2025-08-13T00:54:25.878553591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b2fd34df4f0849b45e776e8c4317cd8aa6e9f7cc00968a31e1e9f28a055e82b\"" Aug 13 00:54:25.881956 env[1359]: time="2025-08-13T00:54:25.881913013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27d765c2dc9bd487f9a56c068328faaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"85b06413e6eb8294d1f5a223a156ed382367f560d00c25fff3dfc8a8ef8109bb\"" Aug 13 00:54:25.882581 env[1359]: time="2025-08-13T00:54:25.882543649Z" level=info msg="CreateContainer within sandbox 
\"6b2fd34df4f0849b45e776e8c4317cd8aa6e9f7cc00968a31e1e9f28a055e82b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:54:25.885109 env[1359]: time="2025-08-13T00:54:25.885090927Z" level=info msg="CreateContainer within sandbox \"85b06413e6eb8294d1f5a223a156ed382367f560d00c25fff3dfc8a8ef8109bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:54:25.891082 env[1359]: time="2025-08-13T00:54:25.891060465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e5e7258f63e452ae11b825b068e1ea109adc0862dd48f4ad3f05797791756f4\"" Aug 13 00:54:25.892680 env[1359]: time="2025-08-13T00:54:25.892664659Z" level=info msg="CreateContainer within sandbox \"8e5e7258f63e452ae11b825b068e1ea109adc0862dd48f4ad3f05797791756f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:54:25.899620 env[1359]: time="2025-08-13T00:54:25.899595145Z" level=info msg="CreateContainer within sandbox \"6b2fd34df4f0849b45e776e8c4317cd8aa6e9f7cc00968a31e1e9f28a055e82b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"63d0164770201e4d594899f3a1fc36f78cc79946c3f39bbcfaddb5a4c5c8895b\"" Aug 13 00:54:25.899757 env[1359]: time="2025-08-13T00:54:25.899742744Z" level=info msg="CreateContainer within sandbox \"85b06413e6eb8294d1f5a223a156ed382367f560d00c25fff3dfc8a8ef8109bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"918da76197d68cd6d5d72dac653bdf68138e23f43759b535e19bd1665610479d\"" Aug 13 00:54:25.900021 env[1359]: time="2025-08-13T00:54:25.900007245Z" level=info msg="StartContainer for \"63d0164770201e4d594899f3a1fc36f78cc79946c3f39bbcfaddb5a4c5c8895b\"" Aug 13 00:54:25.900122 env[1359]: time="2025-08-13T00:54:25.900104612Z" level=info msg="StartContainer for \"918da76197d68cd6d5d72dac653bdf68138e23f43759b535e19bd1665610479d\"" 
Aug 13 00:54:25.909362 env[1359]: time="2025-08-13T00:54:25.909335809Z" level=info msg="CreateContainer within sandbox \"8e5e7258f63e452ae11b825b068e1ea109adc0862dd48f4ad3f05797791756f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"00a6cffb1b34db4334c6878ce142aac2cce2154217aa33a00a31d674fdc39ba9\"" Aug 13 00:54:25.909731 env[1359]: time="2025-08-13T00:54:25.909718816Z" level=info msg="StartContainer for \"00a6cffb1b34db4334c6878ce142aac2cce2154217aa33a00a31d674fdc39ba9\"" Aug 13 00:54:25.990892 env[1359]: time="2025-08-13T00:54:25.983101328Z" level=info msg="StartContainer for \"918da76197d68cd6d5d72dac653bdf68138e23f43759b535e19bd1665610479d\" returns successfully" Aug 13 00:54:25.998823 env[1359]: time="2025-08-13T00:54:25.998634336Z" level=info msg="StartContainer for \"63d0164770201e4d594899f3a1fc36f78cc79946c3f39bbcfaddb5a4c5c8895b\" returns successfully" Aug 13 00:54:25.999694 env[1359]: time="2025-08-13T00:54:25.999666587Z" level=info msg="StartContainer for \"00a6cffb1b34db4334c6878ce142aac2cce2154217aa33a00a31d674fdc39ba9\" returns successfully" Aug 13 00:54:26.159608 kubelet[1928]: E0813 00:54:26.159575 1928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Aug 13 00:54:26.222424 kubelet[1928]: W0813 00:54:26.222377 1928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 00:54:26.222424 kubelet[1928]: E0813 00:54:26.222424 1928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:26.286613 kubelet[1928]: W0813 00:54:26.286538 1928 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Aug 13 00:54:26.286722 kubelet[1928]: E0813 00:54:26.286623 1928 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:26.294735 kubelet[1928]: I0813 00:54:26.294714 1928 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:26.294970 kubelet[1928]: E0813 00:54:26.294952 1928 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Aug 13 00:54:26.883813 kubelet[1928]: E0813 00:54:26.883785 1928 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:27.896793 kubelet[1928]: I0813 00:54:27.896759 1928 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:27.950898 kubelet[1928]: E0813 00:54:27.950872 1928 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:54:28.045545 kubelet[1928]: I0813 00:54:28.045526 1928 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:54:28.045677 kubelet[1928]: E0813 00:54:28.045668 1928 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:54:28.054574 kubelet[1928]: E0813 00:54:28.054541 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:28.155246 kubelet[1928]: E0813 00:54:28.155171 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:28.255425 kubelet[1928]: E0813 00:54:28.255397 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:28.355631 kubelet[1928]: E0813 00:54:28.355603 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:28.456221 kubelet[1928]: E0813 00:54:28.456146 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:28.556618 kubelet[1928]: E0813 00:54:28.556594 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:28.657064 kubelet[1928]: E0813 00:54:28.657035 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:28.758033 kubelet[1928]: E0813 00:54:28.758012 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:28.858133 kubelet[1928]: E0813 00:54:28.858103 1928 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Aug 13 00:54:28.958745 kubelet[1928]: E0813 00:54:28.958722 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:29.059672 kubelet[1928]: E0813 00:54:29.059599 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:29.160039 kubelet[1928]: E0813 00:54:29.159998 1928 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:29.737004 kubelet[1928]: I0813 00:54:29.736974 1928 apiserver.go:52] "Watching apiserver" Aug 13 00:54:29.757250 kubelet[1928]: I0813 00:54:29.757221 1928 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:54:29.837051 systemd[1]: Reloading. Aug 13 00:54:29.884099 /usr/lib/systemd/system-generators/torcx-generator[2216]: time="2025-08-13T00:54:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:29.884124 /usr/lib/systemd/system-generators/torcx-generator[2216]: time="2025-08-13T00:54:29Z" level=info msg="torcx already run" Aug 13 00:54:29.974874 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:29.974893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:29.988227 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:54:30.058107 kubelet[1928]: I0813 00:54:30.058090 1928 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:30.058418 systemd[1]: Stopping kubelet.service... Aug 13 00:54:30.071978 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:54:30.072214 systemd[1]: Stopped kubelet.service. Aug 13 00:54:30.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:30.073005 kernel: kauditd_printk_skb: 43 callbacks suppressed Aug 13 00:54:30.073042 kernel: audit: type=1131 audit(1755046470.070:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:30.077052 systemd[1]: Starting kubelet.service... Aug 13 00:54:31.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:31.241687 systemd[1]: Started kubelet.service. Aug 13 00:54:31.245578 kernel: audit: type=1130 audit(1755046471.240:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:31.406942 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:31.406942 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:54:31.406942 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:31.407214 kubelet[2292]: I0813 00:54:31.406985 2292 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:54:31.413597 kubelet[2292]: I0813 00:54:31.413570 2292 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:54:31.413718 kubelet[2292]: I0813 00:54:31.413709 2292 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:54:31.413968 kubelet[2292]: I0813 00:54:31.413956 2292 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:54:31.414947 kubelet[2292]: I0813 00:54:31.414924 2292 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:54:31.416538 kubelet[2292]: I0813 00:54:31.416527 2292 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:31.423221 kubelet[2292]: E0813 00:54:31.423181 2292 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:54:31.423221 kubelet[2292]: I0813 00:54:31.423218 2292 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:54:31.434052 kubelet[2292]: I0813 00:54:31.434025 2292 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:54:31.434346 kubelet[2292]: I0813 00:54:31.434331 2292 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:54:31.434425 kubelet[2292]: I0813 00:54:31.434402 2292 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:54:31.434541 kubelet[2292]: I0813 00:54:31.434425 2292 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Aug 13 00:54:31.434619 kubelet[2292]: I0813 00:54:31.434547 2292 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:54:31.434619 kubelet[2292]: I0813 00:54:31.434553 2292 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:54:31.434619 kubelet[2292]: I0813 00:54:31.434588 2292 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:31.434687 kubelet[2292]: I0813 00:54:31.434646 2292 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:54:31.434687 kubelet[2292]: I0813 00:54:31.434654 2292 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:54:31.434687 kubelet[2292]: I0813 00:54:31.434670 2292 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:54:31.434687 kubelet[2292]: I0813 00:54:31.434677 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:54:31.437794 kubelet[2292]: I0813 00:54:31.435523 2292 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:54:31.437794 kubelet[2292]: I0813 00:54:31.435793 2292 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:54:31.437794 kubelet[2292]: I0813 00:54:31.435999 2292 server.go:1274] "Started kubelet" Aug 13 00:54:31.437000 audit[2292]: AVC avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:31.437000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:54:31.442592 kernel: audit: type=1400 audit(1755046471.437:228): avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:31.443121 kernel: audit: 
type=1401 audit(1755046471.437:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:54:31.437000 audit[2292]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000735230 a1=c0006fc858 a2=c000735200 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.446979 kernel: audit: type=1300 audit(1755046471.437:228): arch=c000003e syscall=188 success=no exit=-22 a0=c000735230 a1=c0006fc858 a2=c000735200 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.447319 kubelet[2292]: I0813 00:54:31.447122 2292 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:54:31.448487 kernel: audit: type=1327 audit(1755046471.437:228): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:54:31.437000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:54:31.448665 kubelet[2292]: I0813 00:54:31.447461 2292 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr 
/var/lib/kubelet/plugins: invalid argument" Aug 13 00:54:31.448665 kubelet[2292]: I0813 00:54:31.447489 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:54:31.452330 kubelet[2292]: I0813 00:54:31.452306 2292 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:54:31.455254 kubelet[2292]: I0813 00:54:31.453054 2292 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:54:31.455475 kubelet[2292]: I0813 00:54:31.455453 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:31.455945 kubelet[2292]: I0813 00:54:31.455934 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:31.445000 audit[2292]: AVC avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:31.458470 kubelet[2292]: I0813 00:54:31.458458 2292 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:54:31.458718 kubelet[2292]: E0813 00:54:31.458707 2292 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:31.459248 kubelet[2292]: I0813 00:54:31.459241 2292 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:54:31.459418 kubelet[2292]: I0813 00:54:31.459412 2292 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:31.459579 kernel: audit: type=1400 audit(1755046471.445:229): avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:31.469208 kernel: audit: type=1401 audit(1755046471.445:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:54:31.469315 
kernel: audit: type=1300 audit(1755046471.445:229): arch=c000003e syscall=188 success=no exit=-22 a0=c0008e1ce0 a1=c0006fc870 a2=c0007352c0 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.445000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:54:31.445000 audit[2292]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008e1ce0 a1=c0006fc870 a2=c0007352c0 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.469442 kubelet[2292]: I0813 00:54:31.464776 2292 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:31.445000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:54:31.473580 kernel: audit: type=1327 audit(1755046471.445:229): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:54:31.476821 kubelet[2292]: I0813 00:54:31.476807 2292 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:54:31.476973 kubelet[2292]: I0813 00:54:31.476961 2292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Aug 13 00:54:31.484032 kubelet[2292]: I0813 00:54:31.484017 2292 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:54:31.509246 kubelet[2292]: I0813 00:54:31.509042 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:54:31.509784 kubelet[2292]: I0813 00:54:31.509764 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:54:31.509836 kubelet[2292]: I0813 00:54:31.509786 2292 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:54:31.509836 kubelet[2292]: I0813 00:54:31.509816 2292 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:54:31.509878 kubelet[2292]: E0813 00:54:31.509847 2292 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:31.553765 kubelet[2292]: I0813 00:54:31.553696 2292 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:54:31.553765 kubelet[2292]: I0813 00:54:31.553711 2292 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:31.553765 kubelet[2292]: I0813 00:54:31.553727 2292 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:31.553934 kubelet[2292]: I0813 00:54:31.553838 2292 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:54:31.553934 kubelet[2292]: I0813 00:54:31.553846 2292 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:54:31.553934 kubelet[2292]: I0813 00:54:31.553860 2292 policy_none.go:49] "None policy: Start" Aug 13 00:54:31.554323 kubelet[2292]: I0813 00:54:31.554311 2292 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:54:31.554323 kubelet[2292]: I0813 00:54:31.554324 2292 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:54:31.554415 kubelet[2292]: I0813 00:54:31.554400 2292 state_mem.go:75] "Updated machine 
memory state" Aug 13 00:54:31.555140 kubelet[2292]: I0813 00:54:31.555125 2292 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:54:31.553000 audit[2292]: AVC avc: denied { mac_admin } for pid=2292 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:54:31.553000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:54:31.553000 audit[2292]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d10d20 a1=c000abd8a8 a2=c000d10a50 a3=25 items=0 ppid=1 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:31.553000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:54:31.555337 kubelet[2292]: I0813 00:54:31.555166 2292 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:54:31.555337 kubelet[2292]: I0813 00:54:31.555252 2292 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:31.555337 kubelet[2292]: I0813 00:54:31.555258 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:31.555861 kubelet[2292]: I0813 00:54:31.555849 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:54:31.622937 kubelet[2292]: E0813 00:54:31.622898 2292 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:31.657826 kubelet[2292]: I0813 00:54:31.657805 2292 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:31.667111 kubelet[2292]: I0813 00:54:31.667059 2292 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 00:54:31.667269 kubelet[2292]: I0813 00:54:31.667125 2292 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:54:31.760701 kubelet[2292]: I0813 00:54:31.760609 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:31.760836 kubelet[2292]: I0813 00:54:31.760819 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:31.760937 kubelet[2292]: I0813 00:54:31.760923 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:31.761015 kubelet[2292]: I0813 00:54:31.761002 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:31.761098 kubelet[2292]: I0813 00:54:31.761088 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:31.761166 kubelet[2292]: I0813 00:54:31.761157 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27d765c2dc9bd487f9a56c068328faaf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27d765c2dc9bd487f9a56c068328faaf\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:31.761252 kubelet[2292]: I0813 00:54:31.761242 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27d765c2dc9bd487f9a56c068328faaf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27d765c2dc9bd487f9a56c068328faaf\") " 
pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:31.761315 kubelet[2292]: I0813 00:54:31.761299 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27d765c2dc9bd487f9a56c068328faaf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27d765c2dc9bd487f9a56c068328faaf\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:31.761377 kubelet[2292]: I0813 00:54:31.761368 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:32.435408 kubelet[2292]: I0813 00:54:32.435380 2292 apiserver.go:52] "Watching apiserver" Aug 13 00:54:32.460095 kubelet[2292]: I0813 00:54:32.460076 2292 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:54:32.547113 kubelet[2292]: I0813 00:54:32.546729 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.546701844 podStartE2EDuration="3.546701844s" podCreationTimestamp="2025-08-13 00:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:32.546690665 +0000 UTC m=+1.203368329" watchObservedRunningTime="2025-08-13 00:54:32.546701844 +0000 UTC m=+1.203379504" Aug 13 00:54:32.550665 kubelet[2292]: I0813 00:54:32.550613 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.550602825 podStartE2EDuration="1.550602825s" podCreationTimestamp="2025-08-13 00:54:31 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:32.549920188 +0000 UTC m=+1.206597857" watchObservedRunningTime="2025-08-13 00:54:32.550602825 +0000 UTC m=+1.207280486" Aug 13 00:54:34.047193 kubelet[2292]: I0813 00:54:34.047158 2292 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:54:34.047479 kubelet[2292]: I0813 00:54:34.047455 2292 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:54:34.047515 env[1359]: time="2025-08-13T00:54:34.047365238Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:54:34.603827 kubelet[2292]: I0813 00:54:34.603795 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.603771238 podStartE2EDuration="3.603771238s" podCreationTimestamp="2025-08-13 00:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:32.553557529 +0000 UTC m=+1.210235193" watchObservedRunningTime="2025-08-13 00:54:34.603771238 +0000 UTC m=+3.260448898" Aug 13 00:54:34.681373 kubelet[2292]: I0813 00:54:34.681331 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0206b727-332c-4ae4-ab2f-5a7ba79541aa-xtables-lock\") pod \"kube-proxy-gv5p9\" (UID: \"0206b727-332c-4ae4-ab2f-5a7ba79541aa\") " pod="kube-system/kube-proxy-gv5p9" Aug 13 00:54:34.681611 kubelet[2292]: I0813 00:54:34.681557 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxqll\" (UniqueName: \"kubernetes.io/projected/0206b727-332c-4ae4-ab2f-5a7ba79541aa-kube-api-access-wxqll\") pod \"kube-proxy-gv5p9\" (UID: 
\"0206b727-332c-4ae4-ab2f-5a7ba79541aa\") " pod="kube-system/kube-proxy-gv5p9" Aug 13 00:54:34.681734 kubelet[2292]: I0813 00:54:34.681721 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0206b727-332c-4ae4-ab2f-5a7ba79541aa-kube-proxy\") pod \"kube-proxy-gv5p9\" (UID: \"0206b727-332c-4ae4-ab2f-5a7ba79541aa\") " pod="kube-system/kube-proxy-gv5p9" Aug 13 00:54:34.681852 kubelet[2292]: I0813 00:54:34.681818 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0206b727-332c-4ae4-ab2f-5a7ba79541aa-lib-modules\") pod \"kube-proxy-gv5p9\" (UID: \"0206b727-332c-4ae4-ab2f-5a7ba79541aa\") " pod="kube-system/kube-proxy-gv5p9" Aug 13 00:54:34.789698 kubelet[2292]: E0813 00:54:34.789664 2292 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 00:54:34.789856 kubelet[2292]: E0813 00:54:34.789845 2292 projected.go:194] Error preparing data for projected volume kube-api-access-wxqll for pod kube-system/kube-proxy-gv5p9: configmap "kube-root-ca.crt" not found Aug 13 00:54:34.789989 kubelet[2292]: E0813 00:54:34.789970 2292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0206b727-332c-4ae4-ab2f-5a7ba79541aa-kube-api-access-wxqll podName:0206b727-332c-4ae4-ab2f-5a7ba79541aa nodeName:}" failed. No retries permitted until 2025-08-13 00:54:35.289950277 +0000 UTC m=+3.946627943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wxqll" (UniqueName: "kubernetes.io/projected/0206b727-332c-4ae4-ab2f-5a7ba79541aa-kube-api-access-wxqll") pod "kube-proxy-gv5p9" (UID: "0206b727-332c-4ae4-ab2f-5a7ba79541aa") : configmap "kube-root-ca.crt" not found Aug 13 00:54:35.083614 kubelet[2292]: I0813 00:54:35.083577 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d33fb06a-5573-493d-86a1-a19af3be8f4e-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-n2q2m\" (UID: \"d33fb06a-5573-493d-86a1-a19af3be8f4e\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-n2q2m" Aug 13 00:54:35.083614 kubelet[2292]: I0813 00:54:35.083615 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjcv\" (UniqueName: \"kubernetes.io/projected/d33fb06a-5573-493d-86a1-a19af3be8f4e-kube-api-access-qmjcv\") pod \"tigera-operator-5bf8dfcb4-n2q2m\" (UID: \"d33fb06a-5573-493d-86a1-a19af3be8f4e\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-n2q2m" Aug 13 00:54:35.189237 kubelet[2292]: I0813 00:54:35.189211 2292 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:54:35.359726 env[1359]: time="2025-08-13T00:54:35.359652386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-n2q2m,Uid:d33fb06a-5573-493d-86a1-a19af3be8f4e,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:54:35.378636 env[1359]: time="2025-08-13T00:54:35.378492683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:35.378636 env[1359]: time="2025-08-13T00:54:35.378516524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:35.378636 env[1359]: time="2025-08-13T00:54:35.378526576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:35.378807 env[1359]: time="2025-08-13T00:54:35.378654217Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20b65a04014357ce4fb63d5ca0b56565bc8a88079e370b2b3007e69c08e48f90 pid=2341 runtime=io.containerd.runc.v2 Aug 13 00:54:35.434172 env[1359]: time="2025-08-13T00:54:35.434142476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-n2q2m,Uid:d33fb06a-5573-493d-86a1-a19af3be8f4e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"20b65a04014357ce4fb63d5ca0b56565bc8a88079e370b2b3007e69c08e48f90\"" Aug 13 00:54:35.436332 env[1359]: time="2025-08-13T00:54:35.436309544Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:54:35.510897 env[1359]: time="2025-08-13T00:54:35.510871341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gv5p9,Uid:0206b727-332c-4ae4-ab2f-5a7ba79541aa,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:35.552072 env[1359]: time="2025-08-13T00:54:35.551949071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:35.552072 env[1359]: time="2025-08-13T00:54:35.551976120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:35.552072 env[1359]: time="2025-08-13T00:54:35.551983443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:35.552342 env[1359]: time="2025-08-13T00:54:35.552312639Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b71073216fdcaaf73ea92dee2768d66482a2a303bf6fa86f058c95d15d5e0f4b pid=2387 runtime=io.containerd.runc.v2 Aug 13 00:54:35.583171 env[1359]: time="2025-08-13T00:54:35.583145596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gv5p9,Uid:0206b727-332c-4ae4-ab2f-5a7ba79541aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"b71073216fdcaaf73ea92dee2768d66482a2a303bf6fa86f058c95d15d5e0f4b\"" Aug 13 00:54:35.585243 env[1359]: time="2025-08-13T00:54:35.585098510Z" level=info msg="CreateContainer within sandbox \"b71073216fdcaaf73ea92dee2768d66482a2a303bf6fa86f058c95d15d5e0f4b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:54:35.603610 env[1359]: time="2025-08-13T00:54:35.603569782Z" level=info msg="CreateContainer within sandbox \"b71073216fdcaaf73ea92dee2768d66482a2a303bf6fa86f058c95d15d5e0f4b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04e5befd159f754e4a0907170ab16839f0a63dbfda3ee3dd6bcb73fa919052cb\"" Aug 13 00:54:35.604677 env[1359]: time="2025-08-13T00:54:35.604639225Z" level=info msg="StartContainer for \"04e5befd159f754e4a0907170ab16839f0a63dbfda3ee3dd6bcb73fa919052cb\"" Aug 13 00:54:35.646439 env[1359]: time="2025-08-13T00:54:35.646357701Z" level=info msg="StartContainer for \"04e5befd159f754e4a0907170ab16839f0a63dbfda3ee3dd6bcb73fa919052cb\" returns successfully" Aug 13 00:54:36.559699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706811678.mount: Deactivated successfully. 
Aug 13 00:54:36.751583 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 00:54:36.751688 kernel: audit: type=1325 audit(1755046476.745:231): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.751716 kernel: audit: type=1300 audit(1755046476.745:231): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddcab9800 a2=0 a3=7ffddcab97ec items=0 ppid=2438 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.745000 audit[2486]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.745000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddcab9800 a2=0 a3=7ffddcab97ec items=0 ppid=2438 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:54:36.756326 kernel: audit: type=1327 audit(1755046476.745:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:54:36.753000 audit[2487]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.758469 kernel: audit: type=1325 audit(1755046476.753:232): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.753000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8724edb0 a2=0 
a3=7fff8724ed9c items=0 ppid=2438 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:54:36.766035 kernel: audit: type=1300 audit(1755046476.753:232): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8724edb0 a2=0 a3=7fff8724ed9c items=0 ppid=2438 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.766085 kernel: audit: type=1327 audit(1755046476.753:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:54:36.766103 kernel: audit: type=1325 audit(1755046476.757:233): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.757000 audit[2488]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.767839 kernel: audit: type=1300 audit(1755046476.757:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaba96060 a2=0 a3=7ffdaba9604c items=0 ppid=2438 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.757000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaba96060 a2=0 a3=7ffdaba9604c items=0 ppid=2438 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:54:36.774620 kernel: audit: type=1327 audit(1755046476.757:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:54:36.758000 audit[2489]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:36.758000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfe80ef30 a2=0 a3=7ffcfe80ef1c items=0 ppid=2438 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:54:36.773000 audit[2490]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:36.773000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc46cafca0 a2=0 a3=7ffc46cafc8c items=0 ppid=2438 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:54:36.777590 kernel: audit: type=1325 audit(1755046476.758:234): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 
00:54:36.778000 audit[2491]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:36.778000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6f4dde90 a2=0 a3=7ffe6f4dde7c items=0 ppid=2438 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:54:36.899000 audit[2492]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.899000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc38f1f840 a2=0 a3=7ffc38f1f82c items=0 ppid=2438 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:54:36.906000 audit[2494]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.906000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeb1e16ba0 a2=0 a3=7ffeb1e16b8c items=0 ppid=2438 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.906000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 00:54:36.909000 audit[2497]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.909000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdf098c190 a2=0 a3=7ffdf098c17c items=0 ppid=2438 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 00:54:36.911000 audit[2498]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.911000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff56acd270 a2=0 a3=7fff56acd25c items=0 ppid=2438 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:54:36.914000 audit[2500]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.914000 audit[2500]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7fffc7994840 a2=0 a3=7fffc799482c items=0 ppid=2438 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:54:36.915000 audit[2501]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.915000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe279a7470 a2=0 a3=7ffe279a745c items=0 ppid=2438 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:54:36.917000 audit[2503]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.917000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb1707390 a2=0 a3=7fffb170737c items=0 ppid=2438 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.917000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:54:36.921000 audit[2506]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.921000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff21ce3d00 a2=0 a3=7fff21ce3cec items=0 ppid=2438 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 00:54:36.922000 audit[2507]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.922000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffaf36e280 a2=0 a3=7fffaf36e26c items=0 ppid=2438 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:54:36.924000 audit[2509]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.924000 audit[2509]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffec4311430 a2=0 a3=7ffec431141c items=0 ppid=2438 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.924000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:54:36.925000 audit[2510]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.925000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff3e4a960 a2=0 a3=7ffff3e4a94c items=0 ppid=2438 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:54:36.927000 audit[2512]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.927000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe025716b0 a2=0 a3=7ffe0257169c items=0 ppid=2438 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.927000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:54:36.930000 audit[2515]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.930000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeab59aab0 a2=0 a3=7ffeab59aa9c items=0 ppid=2438 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:54:36.933000 audit[2518]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.933000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe57674bb0 a2=0 a3=7ffe57674b9c items=0 ppid=2438 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:54:36.934000 audit[2519]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.934000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3ee95e30 a2=0 a3=7ffc3ee95e1c items=0 ppid=2438 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:54:36.938000 audit[2521]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.938000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc66593600 a2=0 a3=7ffc665935ec items=0 ppid=2438 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.938000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:54:36.941000 audit[2524]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.941000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd544df600 a2=0 a3=7ffd544df5ec items=0 ppid=2438 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.941000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:54:36.945000 audit[2525]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.945000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc8113210 a2=0 a3=7ffdc81131fc items=0 ppid=2438 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:54:36.948000 audit[2527]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:54:36.948000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcac6fd2a0 a2=0 a3=7ffcac6fd28c items=0 ppid=2438 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:54:36.985000 audit[2533]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:36.985000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe6cad5a00 a2=0 a3=7ffe6cad59ec 
items=0 ppid=2438 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:36.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:37.001000 audit[2533]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:37.001000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe6cad5a00 a2=0 a3=7ffe6cad59ec items=0 ppid=2438 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:37.002000 audit[2538]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.002000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe2f7316d0 a2=0 a3=7ffe2f7316bc items=0 ppid=2438 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:54:37.004000 audit[2540]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.004000 audit[2540]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffc2c256520 a2=0 a3=7ffc2c25650c items=0 ppid=2438 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 00:54:37.007000 audit[2543]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.007000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc45b906d0 a2=0 a3=7ffc45b906bc items=0 ppid=2438 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 00:54:37.008000 audit[2544]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.008000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe30a213a0 a2=0 a3=7ffe30a2138c items=0 ppid=2438 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.008000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:54:37.009000 audit[2546]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.009000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe81aa4af0 a2=0 a3=7ffe81aa4adc items=0 ppid=2438 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.009000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:54:37.010000 audit[2547]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.010000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe438419c0 a2=0 a3=7ffe438419ac items=0 ppid=2438 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:54:37.011000 audit[2549]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.011000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff29970520 a2=0 a3=7fff2997050c items=0 ppid=2438 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 00:54:37.014000 audit[2552]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.014000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffea7f36130 a2=0 a3=7ffea7f3611c items=0 ppid=2438 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:54:37.014000 audit[2553]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.014000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7bf34a80 a2=0 a3=7fff7bf34a6c items=0 ppid=2438 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:54:37.016000 audit[2555]: NETFILTER_CFG 
table=filter:74 family=10 entries=1 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.016000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe68c33450 a2=0 a3=7ffe68c3343c items=0 ppid=2438 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.016000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:54:37.017000 audit[2556]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.017000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfff8c150 a2=0 a3=7ffcfff8c13c items=0 ppid=2438 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.017000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:54:37.019000 audit[2558]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.019000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe56a5da50 a2=0 a3=7ffe56a5da3c items=0 ppid=2438 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.019000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:54:37.023000 audit[2561]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.023000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe896af600 a2=0 a3=7ffe896af5ec items=0 ppid=2438 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:54:37.025000 audit[2564]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.025000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0e47a500 a2=0 a3=7fff0e47a4ec items=0 ppid=2438 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 00:54:37.026000 audit[2565]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.026000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf2ce0d60 a2=0 a3=7ffcf2ce0d4c items=0 ppid=2438 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:54:37.028000 audit[2567]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.028000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdcecbf530 a2=0 a3=7ffdcecbf51c items=0 ppid=2438 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:54:37.031000 audit[2570]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.031000 audit[2570]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff03584e40 a2=0 a3=7fff03584e2c items=0 ppid=2438 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.031000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:54:37.031000 audit[2571]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.031000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd77031e0 a2=0 a3=7fffd77031cc items=0 ppid=2438 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:54:37.033000 audit[2573]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.033000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd95b7ae60 a2=0 a3=7ffd95b7ae4c items=0 ppid=2438 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:54:37.034000 audit[2574]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.034000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd38b2d260 a2=0 
a3=7ffd38b2d24c items=0 ppid=2438 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:54:37.037000 audit[2576]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.037000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffc000e290 a2=0 a3=7fffc000e27c items=0 ppid=2438 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:54:37.039000 audit[2579]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:54:37.039000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffff4076d00 a2=0 a3=7ffff4076cec items=0 ppid=2438 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:54:37.041000 audit[2581]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:54:37.041000 audit[2581]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc3de67e60 a2=0 a3=7ffc3de67e4c items=0 ppid=2438 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.041000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:37.042000 audit[2581]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:54:37.042000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc3de67e60 a2=0 a3=7ffc3de67e4c items=0 ppid=2438 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:37.042000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:38.324166 env[1359]: time="2025-08-13T00:54:38.324115852Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:38.342199 env[1359]: time="2025-08-13T00:54:38.341515145Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:38.353482 env[1359]: time="2025-08-13T00:54:38.353448338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:38.362224 env[1359]: time="2025-08-13T00:54:38.362184194Z" 
level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:38.362868 env[1359]: time="2025-08-13T00:54:38.362843792Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:54:38.366624 env[1359]: time="2025-08-13T00:54:38.366590926Z" level=info msg="CreateContainer within sandbox \"20b65a04014357ce4fb63d5ca0b56565bc8a88079e370b2b3007e69c08e48f90\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:54:38.406981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199672585.mount: Deactivated successfully. Aug 13 00:54:38.412007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209329215.mount: Deactivated successfully. Aug 13 00:54:38.434322 env[1359]: time="2025-08-13T00:54:38.434293958Z" level=info msg="CreateContainer within sandbox \"20b65a04014357ce4fb63d5ca0b56565bc8a88079e370b2b3007e69c08e48f90\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2c07a8aeac647413a74d1a919f3e224cebd5ee408857c2b9c5b910da01972abc\"" Aug 13 00:54:38.435546 env[1359]: time="2025-08-13T00:54:38.435040983Z" level=info msg="StartContainer for \"2c07a8aeac647413a74d1a919f3e224cebd5ee408857c2b9c5b910da01972abc\"" Aug 13 00:54:38.502846 env[1359]: time="2025-08-13T00:54:38.502814519Z" level=info msg="StartContainer for \"2c07a8aeac647413a74d1a919f3e224cebd5ee408857c2b9c5b910da01972abc\" returns successfully" Aug 13 00:54:38.585048 kubelet[2292]: I0813 00:54:38.584850 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gv5p9" podStartSLOduration=4.5848357570000005 podStartE2EDuration="4.584835757s" podCreationTimestamp="2025-08-13 00:54:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:36.553683877 +0000 UTC m=+5.210361541" watchObservedRunningTime="2025-08-13 00:54:38.584835757 +0000 UTC m=+7.241513425" Aug 13 00:54:43.132812 kubelet[2292]: I0813 00:54:43.132773 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-n2q2m" podStartSLOduration=5.203924327 podStartE2EDuration="8.132761778s" podCreationTimestamp="2025-08-13 00:54:35 +0000 UTC" firstStartedPulling="2025-08-13 00:54:35.435210997 +0000 UTC m=+4.091888657" lastFinishedPulling="2025-08-13 00:54:38.364048446 +0000 UTC m=+7.020726108" observedRunningTime="2025-08-13 00:54:38.585640411 +0000 UTC m=+7.242318080" watchObservedRunningTime="2025-08-13 00:54:43.132761778 +0000 UTC m=+11.789439446" Aug 13 00:54:44.778000 audit[2661]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:44.787578 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 00:54:44.787687 kernel: audit: type=1325 audit(1755046484.778:282): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:44.778000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd3809c230 a2=0 a3=7ffd3809c21c items=0 ppid=2438 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:44.799585 kernel: audit: type=1300 audit(1755046484.778:282): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd3809c230 a2=0 a3=7ffd3809c21c items=0 ppid=2438 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:44.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:44.807602 kernel: audit: type=1327 audit(1755046484.778:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:44.816000 audit[2661]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:44.821600 kernel: audit: type=1325 audit(1755046484.816:283): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:44.816000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3809c230 a2=0 a3=0 items=0 ppid=2438 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:44.826620 kernel: audit: type=1300 audit(1755046484.816:283): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3809c230 a2=0 a3=0 items=0 ppid=2438 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:44.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:44.830592 kernel: audit: type=1327 audit(1755046484.816:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:44.842000 audit[2664]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule 
pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:44.847587 kernel: audit: type=1325 audit(1755046484.842:284): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:44.842000 audit[2664]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa2c40e40 a2=0 a3=7fffa2c40e2c items=0 ppid=2438 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:44.854583 kernel: audit: type=1300 audit(1755046484.842:284): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa2c40e40 a2=0 a3=7fffa2c40e2c items=0 ppid=2438 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:44.856977 kernel: audit: type=1327 audit(1755046484.842:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:44.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:44.846000 audit[2664]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:44.860576 kernel: audit: type=1325 audit(1755046484.846:285): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:44.846000 audit[2664]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa2c40e40 a2=0 a3=0 items=0 ppid=2438 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:44.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:44.871214 sudo[1623]: pam_unix(sudo:session): session closed for user root Aug 13 00:54:44.869000 audit[1623]: USER_END pid=1623 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:54:44.869000 audit[1623]: CRED_DISP pid=1623 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:54:44.894899 sshd[1617]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:44.901000 audit[1617]: USER_END pid=1617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:44.901000 audit[1617]: CRED_DISP pid=1617 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:44.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.105:22-139.178.68.195:42824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:44.914466 systemd[1]: sshd@6-139.178.70.105:22-139.178.68.195:42824.service: Deactivated successfully. 
Aug 13 00:54:44.915076 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:54:44.916329 systemd-logind[1347]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:54:44.917040 systemd-logind[1347]: Removed session 9. Aug 13 00:54:46.826000 audit[2669]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:46.826000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc26c256c0 a2=0 a3=7ffc26c256ac items=0 ppid=2438 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:46.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:46.832000 audit[2669]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:46.832000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc26c256c0 a2=0 a3=0 items=0 ppid=2438 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:46.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:46.840000 audit[2671]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:46.840000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeb7fb6ef0 a2=0 a3=7ffeb7fb6edc items=0 ppid=2438 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:46.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:46.845000 audit[2671]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:46.845000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb7fb6ef0 a2=0 a3=0 items=0 ppid=2438 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:46.845000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:47.079479 kubelet[2292]: I0813 00:54:47.079361 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a0568b-59e6-4413-b866-4619a577457d-tigera-ca-bundle\") pod \"calico-typha-7f896f87b7-xkhm4\" (UID: \"65a0568b-59e6-4413-b866-4619a577457d\") " pod="calico-system/calico-typha-7f896f87b7-xkhm4" Aug 13 00:54:47.079888 kubelet[2292]: I0813 00:54:47.079868 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65a0568b-59e6-4413-b866-4619a577457d-typha-certs\") pod \"calico-typha-7f896f87b7-xkhm4\" (UID: \"65a0568b-59e6-4413-b866-4619a577457d\") " pod="calico-system/calico-typha-7f896f87b7-xkhm4" Aug 13 00:54:47.080001 kubelet[2292]: I0813 00:54:47.079982 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vq4p\" (UniqueName: 
\"kubernetes.io/projected/65a0568b-59e6-4413-b866-4619a577457d-kube-api-access-6vq4p\") pod \"calico-typha-7f896f87b7-xkhm4\" (UID: \"65a0568b-59e6-4413-b866-4619a577457d\") " pod="calico-system/calico-typha-7f896f87b7-xkhm4" Aug 13 00:54:47.216230 env[1359]: time="2025-08-13T00:54:47.216185681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f896f87b7-xkhm4,Uid:65a0568b-59e6-4413-b866-4619a577457d,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:47.328603 env[1359]: time="2025-08-13T00:54:47.327747166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:47.328603 env[1359]: time="2025-08-13T00:54:47.327775938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:47.328603 env[1359]: time="2025-08-13T00:54:47.327782885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:47.328603 env[1359]: time="2025-08-13T00:54:47.327875139Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbc129efe86b31dc26165f9482a8f975cad592fdfd9c0128fe69d75908fa1e77 pid=2681 runtime=io.containerd.runc.v2 Aug 13 00:54:47.386441 kubelet[2292]: I0813 00:54:47.386240 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-cni-log-dir\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386441 kubelet[2292]: I0813 00:54:47.386285 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-cni-net-dir\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386441 kubelet[2292]: I0813 00:54:47.386298 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-lib-modules\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386441 kubelet[2292]: I0813 00:54:47.386311 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pcjj\" (UniqueName: \"kubernetes.io/projected/34e366d1-86a6-4e03-b620-96eef9460d77-kube-api-access-4pcjj\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386441 kubelet[2292]: I0813 00:54:47.386335 2292 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/34e366d1-86a6-4e03-b620-96eef9460d77-node-certs\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386734 kubelet[2292]: I0813 00:54:47.386347 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e366d1-86a6-4e03-b620-96eef9460d77-tigera-ca-bundle\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386734 kubelet[2292]: I0813 00:54:47.386358 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-flexvol-driver-host\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386734 kubelet[2292]: I0813 00:54:47.386386 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-policysync\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386734 kubelet[2292]: I0813 00:54:47.386422 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-var-run-calico\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386734 kubelet[2292]: I0813 00:54:47.386457 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-xtables-lock\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386903 kubelet[2292]: I0813 00:54:47.386477 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-cni-bin-dir\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.386903 kubelet[2292]: I0813 00:54:47.386490 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34e366d1-86a6-4e03-b620-96eef9460d77-var-lib-calico\") pod \"calico-node-5tj8c\" (UID: \"34e366d1-86a6-4e03-b620-96eef9460d77\") " pod="calico-system/calico-node-5tj8c" Aug 13 00:54:47.431406 env[1359]: time="2025-08-13T00:54:47.431372238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f896f87b7-xkhm4,Uid:65a0568b-59e6-4413-b866-4619a577457d,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbc129efe86b31dc26165f9482a8f975cad592fdfd9c0128fe69d75908fa1e77\"" Aug 13 00:54:47.432481 env[1359]: time="2025-08-13T00:54:47.432450534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:54:47.493960 kubelet[2292]: E0813 00:54:47.493874 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d" Aug 13 00:54:47.514209 kubelet[2292]: E0813 00:54:47.514186 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Aug 13 00:54:47.514371 kubelet[2292]: W0813 00:54:47.514358 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.514456 kubelet[2292]: E0813 00:54:47.514445 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.552185 env[1359]: time="2025-08-13T00:54:47.552146261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5tj8c,Uid:34e366d1-86a6-4e03-b620-96eef9460d77,Namespace:calico-system,Attempt:0,}" Aug 13 00:54:47.583871 env[1359]: time="2025-08-13T00:54:47.583075235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:47.583871 env[1359]: time="2025-08-13T00:54:47.583129496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:47.583871 env[1359]: time="2025-08-13T00:54:47.583142822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:47.583871 env[1359]: time="2025-08-13T00:54:47.583278616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73ef72a7f62100a6c53fd99134ee9b8de739426abac1592f101826a37400f844 pid=2731 runtime=io.containerd.runc.v2 Aug 13 00:54:47.588430 kubelet[2292]: E0813 00:54:47.588394 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.588538 kubelet[2292]: W0813 00:54:47.588435 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.588538 kubelet[2292]: E0813 00:54:47.588456 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.588538 kubelet[2292]: I0813 00:54:47.588483 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7dd644a-3303-4570-9359-66c16da8794d-registration-dir\") pod \"csi-node-driver-q2fnk\" (UID: \"f7dd644a-3303-4570-9359-66c16da8794d\") " pod="calico-system/csi-node-driver-q2fnk" Aug 13 00:54:47.588916 kubelet[2292]: E0813 00:54:47.588897 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.588989 kubelet[2292]: W0813 00:54:47.588910 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.588989 kubelet[2292]: E0813 00:54:47.588951 2292 plugins.go:691] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.588989 kubelet[2292]: I0813 00:54:47.588986 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7dd644a-3303-4570-9359-66c16da8794d-kubelet-dir\") pod \"csi-node-driver-q2fnk\" (UID: \"f7dd644a-3303-4570-9359-66c16da8794d\") " pod="calico-system/csi-node-driver-q2fnk" Aug 13 00:54:47.589195 kubelet[2292]: E0813 00:54:47.589177 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.589246 kubelet[2292]: W0813 00:54:47.589199 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.589246 kubelet[2292]: E0813 00:54:47.589215 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.589246 kubelet[2292]: I0813 00:54:47.589231 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7dd644a-3303-4570-9359-66c16da8794d-socket-dir\") pod \"csi-node-driver-q2fnk\" (UID: \"f7dd644a-3303-4570-9359-66c16da8794d\") " pod="calico-system/csi-node-driver-q2fnk" Aug 13 00:54:47.589884 kubelet[2292]: E0813 00:54:47.589412 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.589884 kubelet[2292]: W0813 00:54:47.589425 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.589884 kubelet[2292]: E0813 00:54:47.589436 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.589884 kubelet[2292]: I0813 00:54:47.589455 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ggm\" (UniqueName: \"kubernetes.io/projected/f7dd644a-3303-4570-9359-66c16da8794d-kube-api-access-n8ggm\") pod \"csi-node-driver-q2fnk\" (UID: \"f7dd644a-3303-4570-9359-66c16da8794d\") " pod="calico-system/csi-node-driver-q2fnk" Aug 13 00:54:47.590119 kubelet[2292]: E0813 00:54:47.589899 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.590119 kubelet[2292]: W0813 00:54:47.589912 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.590119 kubelet[2292]: E0813 00:54:47.589929 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.590119 kubelet[2292]: I0813 00:54:47.589950 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f7dd644a-3303-4570-9359-66c16da8794d-varrun\") pod \"csi-node-driver-q2fnk\" (UID: \"f7dd644a-3303-4570-9359-66c16da8794d\") " pod="calico-system/csi-node-driver-q2fnk" Aug 13 00:54:47.590247 kubelet[2292]: E0813 00:54:47.590147 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.590247 kubelet[2292]: W0813 00:54:47.590156 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.590247 kubelet[2292]: E0813 00:54:47.590218 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.590358 kubelet[2292]: E0813 00:54:47.590320 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.590358 kubelet[2292]: W0813 00:54:47.590326 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.590420 kubelet[2292]: E0813 00:54:47.590392 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.590499 kubelet[2292]: E0813 00:54:47.590481 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.590499 kubelet[2292]: W0813 00:54:47.590496 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.590621 kubelet[2292]: E0813 00:54:47.590576 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.591330 kubelet[2292]: E0813 00:54:47.590859 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.591330 kubelet[2292]: W0813 00:54:47.590874 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.591330 kubelet[2292]: E0813 00:54:47.590945 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.591330 kubelet[2292]: E0813 00:54:47.591008 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.591330 kubelet[2292]: W0813 00:54:47.591016 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.591330 kubelet[2292]: E0813 00:54:47.591090 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.591330 kubelet[2292]: E0813 00:54:47.591188 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.591330 kubelet[2292]: W0813 00:54:47.591195 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.591330 kubelet[2292]: E0813 00:54:47.591205 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.591995 kubelet[2292]: E0813 00:54:47.591779 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.591995 kubelet[2292]: W0813 00:54:47.591790 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.591995 kubelet[2292]: E0813 00:54:47.591809 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.591995 kubelet[2292]: E0813 00:54:47.591962 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.591995 kubelet[2292]: W0813 00:54:47.591970 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.591995 kubelet[2292]: E0813 00:54:47.591983 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.592206 kubelet[2292]: E0813 00:54:47.592107 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.592206 kubelet[2292]: W0813 00:54:47.592114 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.592206 kubelet[2292]: E0813 00:54:47.592122 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.592385 kubelet[2292]: E0813 00:54:47.592244 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.592385 kubelet[2292]: W0813 00:54:47.592251 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.592385 kubelet[2292]: E0813 00:54:47.592262 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.628871 env[1359]: time="2025-08-13T00:54:47.628840261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5tj8c,Uid:34e366d1-86a6-4e03-b620-96eef9460d77,Namespace:calico-system,Attempt:0,} returns sandbox id \"73ef72a7f62100a6c53fd99134ee9b8de739426abac1592f101826a37400f844\"" Aug 13 00:54:47.691177 kubelet[2292]: E0813 00:54:47.691118 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.691286 kubelet[2292]: W0813 00:54:47.691275 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.691344 kubelet[2292]: E0813 00:54:47.691335 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.692905 kubelet[2292]: E0813 00:54:47.692892 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.692971 kubelet[2292]: W0813 00:54:47.692961 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.693024 kubelet[2292]: E0813 00:54:47.693016 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.693189 kubelet[2292]: E0813 00:54:47.693182 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.693242 kubelet[2292]: W0813 00:54:47.693233 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.693293 kubelet[2292]: E0813 00:54:47.693285 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.693463 kubelet[2292]: E0813 00:54:47.693451 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.693524 kubelet[2292]: W0813 00:54:47.693511 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.693583 kubelet[2292]: E0813 00:54:47.693575 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.693763 kubelet[2292]: E0813 00:54:47.693751 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.693825 kubelet[2292]: W0813 00:54:47.693816 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.693878 kubelet[2292]: E0813 00:54:47.693870 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.694028 kubelet[2292]: E0813 00:54:47.694021 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.694077 kubelet[2292]: W0813 00:54:47.694067 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.694163 kubelet[2292]: E0813 00:54:47.694155 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.694263 kubelet[2292]: E0813 00:54:47.694257 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.694314 kubelet[2292]: W0813 00:54:47.694306 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.694384 kubelet[2292]: E0813 00:54:47.694369 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.694514 kubelet[2292]: E0813 00:54:47.694508 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.694575 kubelet[2292]: W0813 00:54:47.694554 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.694632 kubelet[2292]: E0813 00:54:47.694624 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.694774 kubelet[2292]: E0813 00:54:47.694762 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.694816 kubelet[2292]: W0813 00:54:47.694777 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.694816 kubelet[2292]: E0813 00:54:47.694788 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.694940 kubelet[2292]: E0813 00:54:47.694931 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.694940 kubelet[2292]: W0813 00:54:47.694938 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.695017 kubelet[2292]: E0813 00:54:47.695009 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.695124 kubelet[2292]: E0813 00:54:47.695112 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.695124 kubelet[2292]: W0813 00:54:47.695119 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.695217 kubelet[2292]: E0813 00:54:47.695205 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.695363 kubelet[2292]: E0813 00:54:47.695348 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.695363 kubelet[2292]: W0813 00:54:47.695360 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.695429 kubelet[2292]: E0813 00:54:47.695374 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:47.695527 kubelet[2292]: E0813 00:54:47.695518 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.695527 kubelet[2292]: W0813 00:54:47.695525 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.695627 kubelet[2292]: E0813 00:54:47.695618 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:47.695745 kubelet[2292]: E0813 00:54:47.695735 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:47.695745 kubelet[2292]: W0813 00:54:47.695743 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:47.695818 kubelet[2292]: E0813 00:54:47.695810 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 00:54:47.695926 kubelet[2292]: E0813 00:54:47.695916 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:54:47.695926 kubelet[2292]: W0813 00:54:47.695923 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:54:47.696003 kubelet[2292]: E0813 00:54:47.695995 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:54:47.749102 kubelet[2292]: E0813 00:54:47.749081 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:54:47.749243 kubelet[2292]: W0813 00:54:47.749230 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:54:47.749303 kubelet[2292]: E0813 00:54:47.749293 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 13 00:54:47.858000 audit[2807]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:54:47.858000 audit[2807]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff8e8a95b0 a2=0 a3=7fff8e8a959c items=0 ppid=2438 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:47.858000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:54:47.864000 audit[2807]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:54:47.864000 audit[2807]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8e8a95b0 a2=0 a3=0 items=0 ppid=2438 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:47.864000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:54:48.188747 systemd[1]: run-containerd-runc-k8s.io-dbc129efe86b31dc26165f9482a8f975cad592fdfd9c0128fe69d75908fa1e77-runc.tBZ8JS.mount: Deactivated successfully.
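The repeating kubelet triplets above come from the FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument; the executable is absent, so the driver call yields empty output and unmarshalling "" produces "unexpected end of JSON input". A FlexVolume driver is any executable that answers its verbs with a JSON status object on stdout. The following is a minimal hypothetical sketch of such a driver (not the real nodeagent~uds binary), just to illustrate the contract the prober expects:

```python
#!/usr/bin/env python3
# Hypothetical minimal FlexVolume driver sketch (NOT the real
# nodeagent~uds binary). kubelet runs `<driver> init` and expects a
# JSON status object on stdout; empty output is what triggers the
# "unexpected end of JSON input" errors in the log above.
import json
import sys

def main(argv):
    op = argv[1] if len(argv) > 1 else ""
    if op == "init":
        # Report success and declare no attach/detach support.
        return {"status": "Success", "capabilities": {"attach": False}}
    # Verbs this sketch does not implement are reported as unsupported,
    # which lets kubelet fall back to its default handling.
    return {"status": "Not supported", "message": "unhandled op: %s" % op}

if __name__ == "__main__":
    print(json.dumps(main(sys.argv)))
```

Dropping an executable like this at the probed path would silence the triplets, though the underlying question of why nodeagent~uds is expected but missing remains.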
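The audit PROCTITLE values above are the process's argv, hex-encoded with NUL bytes separating the arguments. A quick sketch of decoding the value recorded for pid 2807:

```python
# Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated.
# Value copied from the audit records above (pid 2807).
proctitle = ("69707461626C65732D726573746F7265002D770035002D5700"
             "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
argv = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# prints: iptables-restore -w 5 -W 100000 --noflush --counters
```

So the audited syscall belongs to kube-proxy's periodic `iptables-restore --noflush --counters` rule sync, matching the NETFILTER_CFG entries around it.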
Aug 13 00:54:49.512235 kubelet[2292]: E0813 00:54:49.511807 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d"
Aug 13 00:54:49.708413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102726904.mount: Deactivated successfully.
Aug 13 00:54:50.537679 env[1359]: time="2025-08-13T00:54:50.537640101Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:50.549577 env[1359]: time="2025-08-13T00:54:50.549537530Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:51.053191 env[1359]: time="2025-08-13T00:54:51.053161330Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:51.063501 env[1359]: time="2025-08-13T00:54:51.063469843Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:51.063796 env[1359]: time="2025-08-13T00:54:51.063776766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 00:54:51.070820 env[1359]: time="2025-08-13T00:54:51.070793032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 00:54:51.076205 env[1359]: time="2025-08-13T00:54:51.076181031Z" level=info msg="CreateContainer within sandbox \"dbc129efe86b31dc26165f9482a8f975cad592fdfd9c0128fe69d75908fa1e77\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 00:54:51.139045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511140023.mount: Deactivated successfully.
Aug 13 00:54:51.203498 env[1359]: time="2025-08-13T00:54:51.203463494Z" level=info msg="CreateContainer within sandbox \"dbc129efe86b31dc26165f9482a8f975cad592fdfd9c0128fe69d75908fa1e77\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e44c5aa4763d7d80fa246ac0f6cad4484045b89c6c637a5dbbbdba2f334d71f3\""
Aug 13 00:54:51.204907 env[1359]: time="2025-08-13T00:54:51.204885496Z" level=info msg="StartContainer for \"e44c5aa4763d7d80fa246ac0f6cad4484045b89c6c637a5dbbbdba2f334d71f3\""
Aug 13 00:54:51.279168 env[1359]: time="2025-08-13T00:54:51.279134344Z" level=info msg="StartContainer for \"e44c5aa4763d7d80fa246ac0f6cad4484045b89c6c637a5dbbbdba2f334d71f3\" returns successfully"
Aug 13 00:54:51.511206 kubelet[2292]: E0813 00:54:51.511004 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d"
Aug 13 00:54:51.818481 kubelet[2292]: E0813 00:54:51.818413 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:54:51.818607 kubelet[2292]: W0813 00:54:51.818594 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:54:51.818679 kubelet[2292]: E0813 00:54:51.818669 2292 plugins.go:691] "Error dynamically probing
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:54:51.840090 kubelet[2292]: E0813 00:54:51.823143 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:54:51.840090 kubelet[2292]: W0813 00:54:51.823148 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:54:51.840090 kubelet[2292]: E0813 00:54:51.823153 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:54:52.760038 kubelet[2292]: I0813 00:54:52.759996 2292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:54:52.827218 kubelet[2292]: E0813 00:54:52.827177 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:52.827218 kubelet[2292]: W0813 00:54:52.827201 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:52.827218 kubelet[2292]: E0813 00:54:52.827221 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:52.827472 kubelet[2292]: E0813 00:54:52.827366 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:52.827472 kubelet[2292]: W0813 00:54:52.827371 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:52.827472 kubelet[2292]: E0813 00:54:52.827377 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:54:52.833681 kubelet[2292]: E0813 00:54:52.832300 2292 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:54:52.833681 kubelet[2292]: W0813 00:54:52.832317 2292 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:54:52.833681 kubelet[2292]: E0813 00:54:52.832329 2292 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:54:53.391713 env[1359]: time="2025-08-13T00:54:53.391676152Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:53.410089 env[1359]: time="2025-08-13T00:54:53.410055559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:53.421287 env[1359]: time="2025-08-13T00:54:53.421256450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:53.434495 env[1359]: time="2025-08-13T00:54:53.434461716Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:53.434939 env[1359]: time="2025-08-13T00:54:53.434919379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image 
reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:54:53.437501 env[1359]: time="2025-08-13T00:54:53.437475217Z" level=info msg="CreateContainer within sandbox \"73ef72a7f62100a6c53fd99134ee9b8de739426abac1592f101826a37400f844\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:54:53.487666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761424964.mount: Deactivated successfully. Aug 13 00:54:53.492925 env[1359]: time="2025-08-13T00:54:53.492825802Z" level=info msg="CreateContainer within sandbox \"73ef72a7f62100a6c53fd99134ee9b8de739426abac1592f101826a37400f844\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bb108c1d816a03b135b209721ea46f86b558c9ded3b3278af04cd4b167634000\"" Aug 13 00:54:53.497593 env[1359]: time="2025-08-13T00:54:53.493497897Z" level=info msg="StartContainer for \"bb108c1d816a03b135b209721ea46f86b558c9ded3b3278af04cd4b167634000\"" Aug 13 00:54:53.512574 kubelet[2292]: E0813 00:54:53.511939 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d" Aug 13 00:54:53.527663 systemd[1]: run-containerd-runc-k8s.io-bb108c1d816a03b135b209721ea46f86b558c9ded3b3278af04cd4b167634000-runc.qvKn8f.mount: Deactivated successfully. 
Aug 13 00:54:53.582772 env[1359]: time="2025-08-13T00:54:53.582744708Z" level=info msg="StartContainer for \"bb108c1d816a03b135b209721ea46f86b558c9ded3b3278af04cd4b167634000\" returns successfully" Aug 13 00:54:53.793012 kubelet[2292]: I0813 00:54:53.792974 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f896f87b7-xkhm4" podStartSLOduration=4.160336137 podStartE2EDuration="7.79296215s" podCreationTimestamp="2025-08-13 00:54:46 +0000 UTC" firstStartedPulling="2025-08-13 00:54:47.432148542 +0000 UTC m=+16.088826202" lastFinishedPulling="2025-08-13 00:54:51.064774546 +0000 UTC m=+19.721452215" observedRunningTime="2025-08-13 00:54:51.847248287 +0000 UTC m=+20.503925956" watchObservedRunningTime="2025-08-13 00:54:53.79296215 +0000 UTC m=+22.449639814" Aug 13 00:54:54.435843 env[1359]: time="2025-08-13T00:54:54.435800705Z" level=info msg="shim disconnected" id=bb108c1d816a03b135b209721ea46f86b558c9ded3b3278af04cd4b167634000 Aug 13 00:54:54.436189 env[1359]: time="2025-08-13T00:54:54.435843962Z" level=warning msg="cleaning up after shim disconnected" id=bb108c1d816a03b135b209721ea46f86b558c9ded3b3278af04cd4b167634000 namespace=k8s.io Aug 13 00:54:54.436189 env[1359]: time="2025-08-13T00:54:54.435856945Z" level=info msg="cleaning up dead shim" Aug 13 00:54:54.443622 env[1359]: time="2025-08-13T00:54:54.443578681Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2975 runtime=io.containerd.runc.v2\n" Aug 13 00:54:54.483950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb108c1d816a03b135b209721ea46f86b558c9ded3b3278af04cd4b167634000-rootfs.mount: Deactivated successfully. 
Aug 13 00:54:54.765252 env[1359]: time="2025-08-13T00:54:54.765202741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:54:55.510916 kubelet[2292]: E0813 00:54:55.510863 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d" Aug 13 00:54:57.511123 kubelet[2292]: E0813 00:54:57.510867 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d" Aug 13 00:54:58.251606 env[1359]: time="2025-08-13T00:54:58.251571357Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:58.279711 env[1359]: time="2025-08-13T00:54:58.279682301Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:58.294011 env[1359]: time="2025-08-13T00:54:58.293988058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:58.303732 env[1359]: time="2025-08-13T00:54:58.303708406Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:58.304106 
env[1359]: time="2025-08-13T00:54:58.304085326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Aug 13 00:54:58.305779 env[1359]: time="2025-08-13T00:54:58.305684224Z" level=info msg="CreateContainer within sandbox \"73ef72a7f62100a6c53fd99134ee9b8de739426abac1592f101826a37400f844\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 00:54:58.335308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933974517.mount: Deactivated successfully.
Aug 13 00:54:58.376487 env[1359]: time="2025-08-13T00:54:58.376429885Z" level=info msg="CreateContainer within sandbox \"73ef72a7f62100a6c53fd99134ee9b8de739426abac1592f101826a37400f844\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"960ec820ad91209eedae607145f9af4e6d3ac4abde4bebaf3c796d7241f55bbf\""
Aug 13 00:54:58.377592 env[1359]: time="2025-08-13T00:54:58.377553714Z" level=info msg="StartContainer for \"960ec820ad91209eedae607145f9af4e6d3ac4abde4bebaf3c796d7241f55bbf\""
Aug 13 00:54:58.448315 env[1359]: time="2025-08-13T00:54:58.448281493Z" level=info msg="StartContainer for \"960ec820ad91209eedae607145f9af4e6d3ac4abde4bebaf3c796d7241f55bbf\" returns successfully"
Aug 13 00:54:59.511126 kubelet[2292]: E0813 00:54:59.511085 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d"
Aug 13 00:55:01.125826 env[1359]: time="2025-08-13T00:55:01.125776563Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:55:01.139285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-960ec820ad91209eedae607145f9af4e6d3ac4abde4bebaf3c796d7241f55bbf-rootfs.mount: Deactivated successfully.
Aug 13 00:55:01.169555 env[1359]: time="2025-08-13T00:55:01.169517673Z" level=info msg="shim disconnected" id=960ec820ad91209eedae607145f9af4e6d3ac4abde4bebaf3c796d7241f55bbf
Aug 13 00:55:01.169555 env[1359]: time="2025-08-13T00:55:01.169546589Z" level=warning msg="cleaning up after shim disconnected" id=960ec820ad91209eedae607145f9af4e6d3ac4abde4bebaf3c796d7241f55bbf namespace=k8s.io
Aug 13 00:55:01.169555 env[1359]: time="2025-08-13T00:55:01.169552851Z" level=info msg="cleaning up dead shim"
Aug 13 00:55:01.175169 env[1359]: time="2025-08-13T00:55:01.175147982Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3038 runtime=io.containerd.runc.v2\n"
Aug 13 00:55:01.229858 kubelet[2292]: I0813 00:55:01.229830 2292 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 13 00:55:01.515552 env[1359]: time="2025-08-13T00:55:01.515294620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2fnk,Uid:f7dd644a-3303-4570-9359-66c16da8794d,Namespace:calico-system,Attempt:0,}"
Aug 13 00:55:01.570626 kubelet[2292]: I0813 00:55:01.570602 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8zj\" (UniqueName: \"kubernetes.io/projected/5ec0c637-128e-47db-8784-76084717fd4b-kube-api-access-ps8zj\") pod \"calico-apiserver-7cbf5687db-jtpwn\" (UID: \"5ec0c637-128e-47db-8784-76084717fd4b\") " pod="calico-apiserver/calico-apiserver-7cbf5687db-jtpwn"
Aug 13 00:55:01.570767 kubelet[2292]: I0813 00:55:01.570755 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21361a1e-b6fe-46a2-b608-047b2089e93b-tigera-ca-bundle\") pod \"calico-kube-controllers-7bcc59697d-9mb8b\" (UID: \"21361a1e-b6fe-46a2-b608-047b2089e93b\") " pod="calico-system/calico-kube-controllers-7bcc59697d-9mb8b"
Aug 13 00:55:01.570832 kubelet[2292]: I0813 00:55:01.570822 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0c33fae7-fe42-4436-aa98-d3a4e41c85e1-goldmane-key-pair\") pod \"goldmane-58fd7646b9-kl8br\" (UID: \"0c33fae7-fe42-4436-aa98-d3a4e41c85e1\") " pod="calico-system/goldmane-58fd7646b9-kl8br"
Aug 13 00:55:01.570893 kubelet[2292]: I0813 00:55:01.570883 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24vz\" (UniqueName: \"kubernetes.io/projected/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-kube-api-access-m24vz\") pod \"whisker-969784c8c-swhml\" (UID: \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\") " pod="calico-system/whisker-969784c8c-swhml"
Aug 13 00:55:01.570954 kubelet[2292]: I0813 00:55:01.570945 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xht2s\" (UniqueName: \"kubernetes.io/projected/4e691412-289d-4857-95a4-f28ceeef2595-kube-api-access-xht2s\") pod \"coredns-7c65d6cfc9-p64qm\" (UID: \"4e691412-289d-4857-95a4-f28ceeef2595\") " pod="kube-system/coredns-7c65d6cfc9-p64qm"
Aug 13 00:55:01.575306 kubelet[2292]: I0813 00:55:01.571038 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xthzk\" (UniqueName: \"kubernetes.io/projected/21361a1e-b6fe-46a2-b608-047b2089e93b-kube-api-access-xthzk\") pod \"calico-kube-controllers-7bcc59697d-9mb8b\" (UID: \"21361a1e-b6fe-46a2-b608-047b2089e93b\") " pod="calico-system/calico-kube-controllers-7bcc59697d-9mb8b"
Aug 13 00:55:01.575306 kubelet[2292]: I0813 00:55:01.571055 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n97z9\" (UniqueName: \"kubernetes.io/projected/3a86fd1c-50b7-4722-86dc-208e2b22565d-kube-api-access-n97z9\") pod \"calico-apiserver-7cbf5687db-l5vzj\" (UID: \"3a86fd1c-50b7-4722-86dc-208e2b22565d\") " pod="calico-apiserver/calico-apiserver-7cbf5687db-l5vzj"
Aug 13 00:55:01.575306 kubelet[2292]: I0813 00:55:01.571067 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xtd\" (UniqueName: \"kubernetes.io/projected/416a1a84-00d1-44b4-bc44-a5aa913a36f7-kube-api-access-t7xtd\") pod \"coredns-7c65d6cfc9-qx2bj\" (UID: \"416a1a84-00d1-44b4-bc44-a5aa913a36f7\") " pod="kube-system/coredns-7c65d6cfc9-qx2bj"
Aug 13 00:55:01.575306 kubelet[2292]: I0813 00:55:01.571079 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c33fae7-fe42-4436-aa98-d3a4e41c85e1-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-kl8br\" (UID: \"0c33fae7-fe42-4436-aa98-d3a4e41c85e1\") " pod="calico-system/goldmane-58fd7646b9-kl8br"
Aug 13 00:55:01.575306 kubelet[2292]: I0813 00:55:01.571090 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vh7b\" (UniqueName: \"kubernetes.io/projected/0c33fae7-fe42-4436-aa98-d3a4e41c85e1-kube-api-access-8vh7b\") pod \"goldmane-58fd7646b9-kl8br\" (UID: \"0c33fae7-fe42-4436-aa98-d3a4e41c85e1\") " pod="calico-system/goldmane-58fd7646b9-kl8br"
Aug 13 00:55:01.575460 kubelet[2292]: I0813 00:55:01.571104 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ec0c637-128e-47db-8784-76084717fd4b-calico-apiserver-certs\") pod \"calico-apiserver-7cbf5687db-jtpwn\" (UID: \"5ec0c637-128e-47db-8784-76084717fd4b\") " pod="calico-apiserver/calico-apiserver-7cbf5687db-jtpwn"
Aug 13 00:55:01.575460 kubelet[2292]: I0813 00:55:01.571119 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/416a1a84-00d1-44b4-bc44-a5aa913a36f7-config-volume\") pod \"coredns-7c65d6cfc9-qx2bj\" (UID: \"416a1a84-00d1-44b4-bc44-a5aa913a36f7\") " pod="kube-system/coredns-7c65d6cfc9-qx2bj"
Aug 13 00:55:01.575460 kubelet[2292]: I0813 00:55:01.571130 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndnxx\" (UniqueName: \"kubernetes.io/projected/d3b3d048-dd23-43c6-a58f-53a36c816495-kube-api-access-ndnxx\") pod \"calico-apiserver-6d77c8d6dd-bgt6p\" (UID: \"d3b3d048-dd23-43c6-a58f-53a36c816495\") " pod="calico-apiserver/calico-apiserver-6d77c8d6dd-bgt6p"
Aug 13 00:55:01.575460 kubelet[2292]: I0813 00:55:01.571140 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c33fae7-fe42-4436-aa98-d3a4e41c85e1-config\") pod \"goldmane-58fd7646b9-kl8br\" (UID: \"0c33fae7-fe42-4436-aa98-d3a4e41c85e1\") " pod="calico-system/goldmane-58fd7646b9-kl8br"
Aug 13 00:55:01.575460 kubelet[2292]: I0813 00:55:01.571151 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a86fd1c-50b7-4722-86dc-208e2b22565d-calico-apiserver-certs\") pod \"calico-apiserver-7cbf5687db-l5vzj\" (UID: \"3a86fd1c-50b7-4722-86dc-208e2b22565d\") " pod="calico-apiserver/calico-apiserver-7cbf5687db-l5vzj"
Aug 13 00:55:01.580991 kubelet[2292]: I0813 00:55:01.571160 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e691412-289d-4857-95a4-f28ceeef2595-config-volume\") pod \"coredns-7c65d6cfc9-p64qm\" (UID: \"4e691412-289d-4857-95a4-f28ceeef2595\") " pod="kube-system/coredns-7c65d6cfc9-p64qm"
Aug 13 00:55:01.580991 kubelet[2292]: I0813 00:55:01.571174 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3b3d048-dd23-43c6-a58f-53a36c816495-calico-apiserver-certs\") pod \"calico-apiserver-6d77c8d6dd-bgt6p\" (UID: \"d3b3d048-dd23-43c6-a58f-53a36c816495\") " pod="calico-apiserver/calico-apiserver-6d77c8d6dd-bgt6p"
Aug 13 00:55:01.580991 kubelet[2292]: I0813 00:55:01.571186 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-whisker-backend-key-pair\") pod \"whisker-969784c8c-swhml\" (UID: \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\") " pod="calico-system/whisker-969784c8c-swhml"
Aug 13 00:55:01.580991 kubelet[2292]: I0813 00:55:01.571196 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-whisker-ca-bundle\") pod \"whisker-969784c8c-swhml\" (UID: \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\") " pod="calico-system/whisker-969784c8c-swhml"
Aug 13 00:55:01.696347 env[1359]: time="2025-08-13T00:55:01.696288283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbf5687db-jtpwn,Uid:5ec0c637-128e-47db-8784-76084717fd4b,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 00:55:01.905253 env[1359]: time="2025-08-13T00:55:01.904765156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 00:55:01.953590 env[1359]: time="2025-08-13T00:55:01.953548349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qx2bj,Uid:416a1a84-00d1-44b4-bc44-a5aa913a36f7,Namespace:kube-system,Attempt:0,}"
Aug 13 00:55:01.953822 env[1359]: time="2025-08-13T00:55:01.953803303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-969784c8c-swhml,Uid:913dce23-f3a0-4e7d-9b8e-f702cec1a3d6,Namespace:calico-system,Attempt:0,}"
Aug 13 00:55:01.965932 env[1359]: time="2025-08-13T00:55:01.965905916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77c8d6dd-bgt6p,Uid:d3b3d048-dd23-43c6-a58f-53a36c816495,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 00:55:01.983529 env[1359]: time="2025-08-13T00:55:01.983501436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kl8br,Uid:0c33fae7-fe42-4436-aa98-d3a4e41c85e1,Namespace:calico-system,Attempt:0,}"
Aug 13 00:55:01.983856 env[1359]: time="2025-08-13T00:55:01.983842655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbf5687db-l5vzj,Uid:3a86fd1c-50b7-4722-86dc-208e2b22565d,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 00:55:01.993667 env[1359]: time="2025-08-13T00:55:01.984181687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bcc59697d-9mb8b,Uid:21361a1e-b6fe-46a2-b608-047b2089e93b,Namespace:calico-system,Attempt:0,}"
Aug 13 00:55:01.998003 env[1359]: time="2025-08-13T00:55:01.997979392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p64qm,Uid:4e691412-289d-4857-95a4-f28ceeef2595,Namespace:kube-system,Attempt:0,}"
Aug 13 00:55:02.556786 env[1359]: time="2025-08-13T00:55:02.556733132Z" level=error msg="Failed to destroy network for sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.558736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433-shm.mount: Deactivated successfully.
Aug 13 00:55:02.559442 env[1359]: time="2025-08-13T00:55:02.559407920Z" level=error msg="encountered an error cleaning up failed sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.559548 env[1359]: time="2025-08-13T00:55:02.559526381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bcc59697d-9mb8b,Uid:21361a1e-b6fe-46a2-b608-047b2089e93b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.583696 kubelet[2292]: E0813 00:55:02.583653 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.600110 env[1359]: time="2025-08-13T00:55:02.600073149Z" level=error msg="Failed to destroy network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.603025514Z" level=error msg="encountered an error cleaning up failed sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.603077540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2fnk,Uid:f7dd644a-3303-4570-9359-66c16da8794d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.603267724Z" level=error msg="Failed to destroy network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.605863413Z" level=error msg="encountered an error cleaning up failed sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.605901866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p64qm,Uid:4e691412-289d-4857-95a4-f28ceeef2595,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.605954252Z" level=error msg="Failed to destroy network for sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.606141769Z" level=error msg="encountered an error cleaning up failed sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.606161807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qx2bj,Uid:416a1a84-00d1-44b4-bc44-a5aa913a36f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.606204205Z" level=error msg="Failed to destroy network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.606370639Z" level=error msg="encountered an error cleaning up failed sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.609206 env[1359]: time="2025-08-13T00:55:02.606388556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbf5687db-l5vzj,Uid:3a86fd1c-50b7-4722-86dc-208e2b22565d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.627913 kubelet[2292]: E0813 00:55:02.606544 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.627913 kubelet[2292]: E0813 00:55:02.613624 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cbf5687db-l5vzj"
Aug 13 00:55:02.627913 kubelet[2292]: E0813 00:55:02.613689 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bcc59697d-9mb8b"
Aug 13 00:55:02.627913 kubelet[2292]: E0813 00:55:02.619000 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cbf5687db-l5vzj"
Aug 13 00:55:02.601810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755-shm.mount: Deactivated successfully.
Aug 13 00:55:02.636487 kubelet[2292]: E0813 00:55:02.619070 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cbf5687db-l5vzj_calico-apiserver(3a86fd1c-50b7-4722-86dc-208e2b22565d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cbf5687db-l5vzj_calico-apiserver(3a86fd1c-50b7-4722-86dc-208e2b22565d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cbf5687db-l5vzj" podUID="3a86fd1c-50b7-4722-86dc-208e2b22565d"
Aug 13 00:55:02.636487 kubelet[2292]: E0813 00:55:02.619255 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.636487 kubelet[2292]: E0813 00:55:02.619288 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bcc59697d-9mb8b"
Aug 13 00:55:02.636766 env[1359]: time="2025-08-13T00:55:02.631437489Z" level=error msg="Failed to destroy network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.636766 env[1359]: time="2025-08-13T00:55:02.631722882Z" level=error msg="encountered an error cleaning up failed sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.636766 env[1359]: time="2025-08-13T00:55:02.631760477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-969784c8c-swhml,Uid:913dce23-f3a0-4e7d-9b8e-f702cec1a3d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.636766 env[1359]: time="2025-08-13T00:55:02.632197823Z" level=error msg="Failed to destroy network for sandbox \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.636766 env[1359]: time="2025-08-13T00:55:02.632407569Z" level=error msg="encountered an error cleaning up failed sandbox \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.636766 env[1359]: time="2025-08-13T00:55:02.632458533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77c8d6dd-bgt6p,Uid:d3b3d048-dd23-43c6-a58f-53a36c816495,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.604875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131-shm.mount: Deactivated successfully.
Aug 13 00:55:02.637146 kubelet[2292]: E0813 00:55:02.620122 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bcc59697d-9mb8b_calico-system(21361a1e-b6fe-46a2-b608-047b2089e93b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bcc59697d-9mb8b_calico-system(21361a1e-b6fe-46a2-b608-047b2089e93b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bcc59697d-9mb8b" podUID="21361a1e-b6fe-46a2-b608-047b2089e93b"
Aug 13 00:55:02.637146 kubelet[2292]: E0813 00:55:02.620176 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.637146 kubelet[2292]: E0813 00:55:02.620199 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p64qm"
Aug 13 00:55:02.642822 kubelet[2292]: E0813 00:55:02.620210 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p64qm"
Aug 13 00:55:02.642822 kubelet[2292]: E0813 00:55:02.620231 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-p64qm_kube-system(4e691412-289d-4857-95a4-f28ceeef2595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-p64qm_kube-system(4e691412-289d-4857-95a4-f28ceeef2595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p64qm" podUID="4e691412-289d-4857-95a4-f28ceeef2595"
Aug 13 00:55:02.642822 kubelet[2292]: E0813 00:55:02.620334 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.642944 kubelet[2292]: E0813 00:55:02.620348 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qx2bj"
Aug 13 00:55:02.642944 kubelet[2292]: E0813 00:55:02.620356 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qx2bj"
Aug 13 00:55:02.642944 kubelet[2292]: E0813 00:55:02.620372 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qx2bj_kube-system(416a1a84-00d1-44b4-bc44-a5aa913a36f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qx2bj_kube-system(416a1a84-00d1-44b4-bc44-a5aa913a36f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qx2bj" podUID="416a1a84-00d1-44b4-bc44-a5aa913a36f7"
Aug 13 00:55:02.657801 kubelet[2292]: E0813 00:55:02.620397 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q2fnk"
Aug 13 00:55:02.657801 kubelet[2292]: E0813 00:55:02.620409 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q2fnk"
Aug 13 00:55:02.657801 kubelet[2292]: E0813 00:55:02.620435 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q2fnk_calico-system(f7dd644a-3303-4570-9359-66c16da8794d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q2fnk_calico-system(f7dd644a-3303-4570-9359-66c16da8794d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d"
Aug 13 00:55:02.657919 env[1359]: time="2025-08-13T00:55:02.645095463Z" level=error msg="Failed to destroy network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.657919 env[1359]: time="2025-08-13T00:55:02.645384219Z" level=error msg="encountered an error cleaning up failed sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.657919 env[1359]: time="2025-08-13T00:55:02.645414638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbf5687db-jtpwn,Uid:5ec0c637-128e-47db-8784-76084717fd4b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.657919 env[1359]: time="2025-08-13T00:55:02.653471157Z" level=error msg="Failed to destroy network for sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.657919 env[1359]: time="2025-08-13T00:55:02.653756182Z" level=error msg="encountered an error cleaning up failed sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.657919 env[1359]: time="2025-08-13T00:55:02.653806384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kl8br,Uid:0c33fae7-fe42-4436-aa98-d3a4e41c85e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.658071 kubelet[2292]: E0813 00:55:02.632033 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.658071 kubelet[2292]: E0813 00:55:02.632090 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-969784c8c-swhml"
Aug 13 00:55:02.658071 kubelet[2292]: E0813 00:55:02.632103 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-969784c8c-swhml"
Aug 13 00:55:02.658137 kubelet[2292]: E0813 00:55:02.632146 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-969784c8c-swhml_calico-system(913dce23-f3a0-4e7d-9b8e-f702cec1a3d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-969784c8c-swhml_calico-system(913dce23-f3a0-4e7d-9b8e-f702cec1a3d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-969784c8c-swhml" podUID="913dce23-f3a0-4e7d-9b8e-f702cec1a3d6"
Aug 13 00:55:02.658137 kubelet[2292]: E0813 00:55:02.632544 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:55:02.658137 kubelet[2292]: E0813 00:55:02.632584 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d77c8d6dd-bgt6p"
Aug 13 00:55:02.658214 kubelet[2292]: E0813 00:55:02.632595 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d77c8d6dd-bgt6p" Aug 13 00:55:02.658214 kubelet[2292]: E0813 00:55:02.632613 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d77c8d6dd-bgt6p_calico-apiserver(d3b3d048-dd23-43c6-a58f-53a36c816495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d77c8d6dd-bgt6p_calico-apiserver(d3b3d048-dd23-43c6-a58f-53a36c816495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d77c8d6dd-bgt6p" podUID="d3b3d048-dd23-43c6-a58f-53a36c816495" Aug 13 00:55:02.658214 kubelet[2292]: E0813 00:55:02.645617 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:02.658298 kubelet[2292]: E0813 00:55:02.645662 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cbf5687db-jtpwn" Aug 13 00:55:02.658298 kubelet[2292]: E0813 00:55:02.645681 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cbf5687db-jtpwn" Aug 13 00:55:02.658298 kubelet[2292]: E0813 00:55:02.645707 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cbf5687db-jtpwn_calico-apiserver(5ec0c637-128e-47db-8784-76084717fd4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cbf5687db-jtpwn_calico-apiserver(5ec0c637-128e-47db-8784-76084717fd4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cbf5687db-jtpwn" podUID="5ec0c637-128e-47db-8784-76084717fd4b" Aug 13 00:55:02.658400 kubelet[2292]: E0813 00:55:02.654265 2292 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:02.658400 kubelet[2292]: E0813 00:55:02.654299 2292 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-kl8br" Aug 13 00:55:02.658400 kubelet[2292]: E0813 00:55:02.654312 2292 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-kl8br" Aug 13 00:55:02.658463 kubelet[2292]: E0813 00:55:02.654349 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-kl8br_calico-system(0c33fae7-fe42-4436-aa98-d3a4e41c85e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-kl8br_calico-system(0c33fae7-fe42-4436-aa98-d3a4e41c85e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-kl8br" podUID="0c33fae7-fe42-4436-aa98-d3a4e41c85e1" Aug 13 00:55:02.877237 kubelet[2292]: I0813 00:55:02.875774 2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Aug 13 00:55:02.877237 kubelet[2292]: I0813 00:55:02.876966 2292 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:55:02.877366 env[1359]: time="2025-08-13T00:55:02.876283139Z" level=info msg="StopPodSandbox for \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\"" Aug 13 00:55:02.877366 env[1359]: time="2025-08-13T00:55:02.877253107Z" level=info msg="StopPodSandbox for \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\"" Aug 13 00:55:02.878477 kubelet[2292]: I0813 00:55:02.877770 2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Aug 13 00:55:02.878537 env[1359]: time="2025-08-13T00:55:02.878027779Z" level=info msg="StopPodSandbox for \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\"" Aug 13 00:55:02.878589 kubelet[2292]: I0813 00:55:02.878580 2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Aug 13 00:55:02.878958 env[1359]: time="2025-08-13T00:55:02.878935051Z" level=info msg="StopPodSandbox for \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\"" Aug 13 00:55:02.880281 kubelet[2292]: I0813 00:55:02.880263 2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:55:02.880665 env[1359]: time="2025-08-13T00:55:02.880648847Z" level=info msg="StopPodSandbox for \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\"" Aug 13 00:55:02.881674 kubelet[2292]: I0813 00:55:02.881662 2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:55:02.882015 env[1359]: time="2025-08-13T00:55:02.881995484Z" level=info msg="StopPodSandbox for 
\"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\"" Aug 13 00:55:02.882800 kubelet[2292]: I0813 00:55:02.882690 2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Aug 13 00:55:02.883047 env[1359]: time="2025-08-13T00:55:02.883027013Z" level=info msg="StopPodSandbox for \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\"" Aug 13 00:55:02.883577 kubelet[2292]: I0813 00:55:02.883552 2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:02.883970 env[1359]: time="2025-08-13T00:55:02.883955964Z" level=info msg="StopPodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\"" Aug 13 00:55:02.884811 kubelet[2292]: I0813 00:55:02.884799 2292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:55:02.892038 env[1359]: time="2025-08-13T00:55:02.885128918Z" level=info msg="StopPodSandbox for \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\"" Aug 13 00:55:02.919857 env[1359]: time="2025-08-13T00:55:02.919808404Z" level=error msg="StopPodSandbox for \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\" failed" error="failed to destroy network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:02.920224 kubelet[2292]: E0813 00:55:02.920118 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:55:02.920224 kubelet[2292]: E0813 00:55:02.920155 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe"} Aug 13 00:55:02.920224 kubelet[2292]: E0813 00:55:02.920192 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a86fd1c-50b7-4722-86dc-208e2b22565d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:02.920224 kubelet[2292]: E0813 00:55:02.920205 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a86fd1c-50b7-4722-86dc-208e2b22565d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cbf5687db-l5vzj" podUID="3a86fd1c-50b7-4722-86dc-208e2b22565d" Aug 13 00:55:02.930175 env[1359]: time="2025-08-13T00:55:02.930137966Z" level=error msg="StopPodSandbox for \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\" failed" error="failed to destroy network for sandbox \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:02.930334 kubelet[2292]: E0813 00:55:02.930296 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Aug 13 00:55:02.930393 kubelet[2292]: E0813 00:55:02.930343 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763"} Aug 13 00:55:02.930393 kubelet[2292]: E0813 00:55:02.930364 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3b3d048-dd23-43c6-a58f-53a36c816495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:02.930467 kubelet[2292]: E0813 00:55:02.930388 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3b3d048-dd23-43c6-a58f-53a36c816495\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d77c8d6dd-bgt6p" podUID="d3b3d048-dd23-43c6-a58f-53a36c816495" Aug 13 00:55:02.943871 env[1359]: time="2025-08-13T00:55:02.943619658Z" level=error msg="StopPodSandbox for \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\" failed" error="failed to destroy network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:02.944661 kubelet[2292]: E0813 00:55:02.944283 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:55:02.944661 kubelet[2292]: E0813 00:55:02.944320 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131"} Aug 13 00:55:02.944661 kubelet[2292]: E0813 00:55:02.944345 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e691412-289d-4857-95a4-f28ceeef2595\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:02.944661 kubelet[2292]: E0813 
00:55:02.944359 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e691412-289d-4857-95a4-f28ceeef2595\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p64qm" podUID="4e691412-289d-4857-95a4-f28ceeef2595" Aug 13 00:55:02.959794 env[1359]: time="2025-08-13T00:55:02.959746938Z" level=error msg="StopPodSandbox for \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\" failed" error="failed to destroy network for sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:02.960143 kubelet[2292]: E0813 00:55:02.960108 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Aug 13 00:55:02.960232 kubelet[2292]: E0813 00:55:02.960152 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051"} Aug 13 00:55:02.960232 kubelet[2292]: E0813 00:55:02.960175 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"416a1a84-00d1-44b4-bc44-a5aa913a36f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:02.960232 kubelet[2292]: E0813 00:55:02.960193 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"416a1a84-00d1-44b4-bc44-a5aa913a36f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qx2bj" podUID="416a1a84-00d1-44b4-bc44-a5aa913a36f7" Aug 13 00:55:02.960626 env[1359]: time="2025-08-13T00:55:02.960590066Z" level=error msg="StopPodSandbox for \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\" failed" error="failed to destroy network for sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:02.960886 kubelet[2292]: E0813 00:55:02.960762 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Aug 13 00:55:02.960886 kubelet[2292]: E0813 00:55:02.960804 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433"} Aug 13 00:55:02.960886 kubelet[2292]: E0813 00:55:02.960829 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21361a1e-b6fe-46a2-b608-047b2089e93b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:02.960886 kubelet[2292]: E0813 00:55:02.960853 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21361a1e-b6fe-46a2-b608-047b2089e93b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bcc59697d-9mb8b" podUID="21361a1e-b6fe-46a2-b608-047b2089e93b" Aug 13 00:55:02.979722 env[1359]: time="2025-08-13T00:55:02.979673820Z" level=error msg="StopPodSandbox for \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\" failed" error="failed to destroy network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 00:55:02.979853 kubelet[2292]: E0813 00:55:02.979823 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:55:02.979899 kubelet[2292]: E0813 00:55:02.979850 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755"} Aug 13 00:55:02.979899 kubelet[2292]: E0813 00:55:02.979870 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7dd644a-3303-4570-9359-66c16da8794d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:02.979899 kubelet[2292]: E0813 00:55:02.979885 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7dd644a-3303-4570-9359-66c16da8794d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q2fnk" podUID="f7dd644a-3303-4570-9359-66c16da8794d" Aug 13 00:55:02.982650 
env[1359]: time="2025-08-13T00:55:02.982608684Z" level=error msg="StopPodSandbox for \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\" failed" error="failed to destroy network for sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:02.982800 kubelet[2292]: E0813 00:55:02.982765 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Aug 13 00:55:02.982849 kubelet[2292]: E0813 00:55:02.982800 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae"} Aug 13 00:55:02.982849 kubelet[2292]: E0813 00:55:02.982824 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c33fae7-fe42-4436-aa98-d3a4e41c85e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:02.982849 kubelet[2292]: E0813 00:55:02.982838 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c33fae7-fe42-4436-aa98-d3a4e41c85e1\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-kl8br" podUID="0c33fae7-fe42-4436-aa98-d3a4e41c85e1" Aug 13 00:55:03.001167 kubelet[2292]: E0813 00:55:02.985954 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:03.001167 kubelet[2292]: E0813 00:55:02.986002 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f"} Aug 13 00:55:03.001167 kubelet[2292]: E0813 00:55:02.986026 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ec0c637-128e-47db-8784-76084717fd4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:03.001167 kubelet[2292]: E0813 00:55:02.986040 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ec0c637-128e-47db-8784-76084717fd4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cbf5687db-jtpwn" podUID="5ec0c637-128e-47db-8784-76084717fd4b" Aug 13 00:55:03.001355 env[1359]: time="2025-08-13T00:55:02.985785575Z" level=error msg="StopPodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\" failed" error="failed to destroy network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:03.001355 env[1359]: time="2025-08-13T00:55:02.990525272Z" level=error msg="StopPodSandbox for \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\" failed" error="failed to destroy network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:55:03.001410 kubelet[2292]: E0813 00:55:02.990696 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:55:03.001410 kubelet[2292]: E0813 00:55:02.990738 2292 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538"} Aug 13 00:55:03.001410 kubelet[2292]: E0813 00:55:02.990767 2292 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:55:03.001410 kubelet[2292]: E0813 00:55:02.990796 2292 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-969784c8c-swhml" podUID="913dce23-f3a0-4e7d-9b8e-f702cec1a3d6" Aug 13 00:55:03.139581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe-shm.mount: Deactivated successfully. Aug 13 00:55:03.139678 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae-shm.mount: Deactivated successfully. Aug 13 00:55:03.139735 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763-shm.mount: Deactivated successfully. 
Aug 13 00:55:03.139788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051-shm.mount: Deactivated successfully. Aug 13 00:55:03.139841 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538-shm.mount: Deactivated successfully. Aug 13 00:55:03.139894 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f-shm.mount: Deactivated successfully. Aug 13 00:55:05.535457 kubelet[2292]: I0813 00:55:05.535214 2292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:55:06.020000 audit[3446]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:06.025319 kernel: kauditd_printk_skb: 25 callbacks suppressed Aug 13 00:55:06.026206 kernel: audit: type=1325 audit(1755046506.020:297): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:06.026238 kernel: audit: type=1300 audit(1755046506.020:297): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc6e170f90 a2=0 a3=7ffc6e170f7c items=0 ppid=2438 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:06.020000 audit[3446]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc6e170f90 a2=0 a3=7ffc6e170f7c items=0 ppid=2438 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:06.020000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:06.032591 kernel: audit: type=1327 audit(1755046506.020:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:06.033242 kernel: audit: type=1325 audit(1755046506.031:298): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:06.031000 audit[3446]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:06.035475 kernel: audit: type=1300 audit(1755046506.031:298): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc6e170f90 a2=0 a3=7ffc6e170f7c items=0 ppid=2438 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:06.031000 audit[3446]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc6e170f90 a2=0 a3=7ffc6e170f7c items=0 ppid=2438 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:06.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:06.042185 kernel: audit: type=1327 audit(1755046506.031:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:11.435532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667027960.mount: Deactivated successfully. 
Aug 13 00:55:12.080306 env[1359]: time="2025-08-13T00:55:12.080263211Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:12.094012 env[1359]: time="2025-08-13T00:55:12.093980596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:12.099261 env[1359]: time="2025-08-13T00:55:12.099235837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:12.108526 env[1359]: time="2025-08-13T00:55:12.108497894Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:12.108854 env[1359]: time="2025-08-13T00:55:12.108834534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:55:12.226893 env[1359]: time="2025-08-13T00:55:12.226858829Z" level=info msg="CreateContainer within sandbox \"73ef72a7f62100a6c53fd99134ee9b8de739426abac1592f101826a37400f844\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:55:12.243433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086782578.mount: Deactivated successfully. 
Aug 13 00:55:12.247008 env[1359]: time="2025-08-13T00:55:12.246982576Z" level=info msg="CreateContainer within sandbox \"73ef72a7f62100a6c53fd99134ee9b8de739426abac1592f101826a37400f844\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4c03a0188c4576565a1674ef49cdd4981f3247bac1d774827e70dcbf7f9f30f0\"" Aug 13 00:55:12.248892 env[1359]: time="2025-08-13T00:55:12.248870617Z" level=info msg="StartContainer for \"4c03a0188c4576565a1674ef49cdd4981f3247bac1d774827e70dcbf7f9f30f0\"" Aug 13 00:55:12.290333 env[1359]: time="2025-08-13T00:55:12.290298396Z" level=info msg="StartContainer for \"4c03a0188c4576565a1674ef49cdd4981f3247bac1d774827e70dcbf7f9f30f0\" returns successfully" Aug 13 00:55:13.105269 kubelet[2292]: I0813 00:55:13.103039 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5tj8c" podStartSLOduration=1.614183228 podStartE2EDuration="26.085469514s" podCreationTimestamp="2025-08-13 00:54:47 +0000 UTC" firstStartedPulling="2025-08-13 00:54:47.638211923 +0000 UTC m=+16.294889581" lastFinishedPulling="2025-08-13 00:55:12.10949821 +0000 UTC m=+40.766175867" observedRunningTime="2025-08-13 00:55:13.06347321 +0000 UTC m=+41.720150880" watchObservedRunningTime="2025-08-13 00:55:13.085469514 +0000 UTC m=+41.742147178" Aug 13 00:55:13.378972 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:55:13.410651 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 00:55:14.013763 env[1359]: time="2025-08-13T00:55:14.013726099Z" level=info msg="StopPodSandbox for \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\"" Aug 13 00:55:14.514162 env[1359]: time="2025-08-13T00:55:14.513593813Z" level=info msg="StopPodSandbox for \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\"" Aug 13 00:55:14.514162 env[1359]: time="2025-08-13T00:55:14.514076016Z" level=info msg="StopPodSandbox for \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\"" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.609 [INFO][3558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.609 [INFO][3558] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" iface="eth0" netns="/var/run/netns/cni-76221f5e-4add-720f-ebb0-de5c4d3d92da" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.609 [INFO][3558] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" iface="eth0" netns="/var/run/netns/cni-76221f5e-4add-720f-ebb0-de5c4d3d92da" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.609 [INFO][3558] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" iface="eth0" netns="/var/run/netns/cni-76221f5e-4add-720f-ebb0-de5c4d3d92da" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.609 [INFO][3558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.609 [INFO][3558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.813 [INFO][3570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.818 [INFO][3570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.819 [INFO][3570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.829 [WARNING][3570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.829 [INFO][3570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.830 [INFO][3570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:14.840484 env[1359]: 2025-08-13 00:55:14.834 [INFO][3558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.148 [INFO][3525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.150 [INFO][3525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" iface="eth0" netns="/var/run/netns/cni-7666bfc2-45f0-3213-76c4-d7d1f6a79f08" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.150 [INFO][3525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" iface="eth0" netns="/var/run/netns/cni-7666bfc2-45f0-3213-76c4-d7d1f6a79f08" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.152 [INFO][3525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" iface="eth0" netns="/var/run/netns/cni-7666bfc2-45f0-3213-76c4-d7d1f6a79f08" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.152 [INFO][3525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.152 [INFO][3525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.817 [INFO][3533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.818 [INFO][3533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.830 [INFO][3533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.835 [WARNING][3533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.835 [INFO][3533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.836 [INFO][3533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:14.848943 env[1359]: 2025-08-13 00:55:14.842 [INFO][3525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:55:14.846392 systemd[1]: run-netns-cni\x2d76221f5e\x2d4add\x2d720f\x2debb0\x2dde5c4d3d92da.mount: Deactivated successfully. Aug 13 00:55:14.854162 env[1359]: time="2025-08-13T00:55:14.849632588Z" level=info msg="TearDown network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\" successfully" Aug 13 00:55:14.854162 env[1359]: time="2025-08-13T00:55:14.849658020Z" level=info msg="StopPodSandbox for \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\" returns successfully" Aug 13 00:55:14.854162 env[1359]: time="2025-08-13T00:55:14.850058626Z" level=info msg="TearDown network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\" successfully" Aug 13 00:55:14.854162 env[1359]: time="2025-08-13T00:55:14.850081620Z" level=info msg="StopPodSandbox for \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\" returns successfully" Aug 13 00:55:14.848809 systemd[1]: run-netns-cni\x2d7666bfc2\x2d45f0\x2d3213\x2d76c4\x2dd7d1f6a79f08.mount: Deactivated successfully. 
Aug 13 00:55:14.857159 systemd[1]: run-netns-cni\x2d59db00f9\x2d7122\x2d9bfd\x2dd717\x2de71ae8db7f94.mount: Deactivated successfully. Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.633 [INFO][3557] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.633 [INFO][3557] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" iface="eth0" netns="/var/run/netns/cni-59db00f9-7122-9bfd-d717-e71ae8db7f94" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.633 [INFO][3557] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" iface="eth0" netns="/var/run/netns/cni-59db00f9-7122-9bfd-d717-e71ae8db7f94" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.633 [INFO][3557] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" iface="eth0" netns="/var/run/netns/cni-59db00f9-7122-9bfd-d717-e71ae8db7f94" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.633 [INFO][3557] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.633 [INFO][3557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.813 [INFO][3576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" HandleID="k8s-pod-network.448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.818 [INFO][3576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.837 [INFO][3576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.850 [WARNING][3576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" HandleID="k8s-pod-network.448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.850 [INFO][3576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" HandleID="k8s-pod-network.448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.851 [INFO][3576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:14.859154 env[1359]: 2025-08-13 00:55:14.854 [INFO][3557] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763" Aug 13 00:55:14.859154 env[1359]: time="2025-08-13T00:55:14.857182715Z" level=info msg="TearDown network for sandbox \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\" successfully" Aug 13 00:55:14.859154 env[1359]: time="2025-08-13T00:55:14.857206503Z" level=info msg="StopPodSandbox for \"448232f1bfa784f2fbd6da971e17c61bbae255d88fe672a569d6b85bfa8ab763\" returns successfully" Aug 13 00:55:14.871915 env[1359]: time="2025-08-13T00:55:14.871889579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbf5687db-l5vzj,Uid:3a86fd1c-50b7-4722-86dc-208e2b22565d,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:55:14.872280 env[1359]: time="2025-08-13T00:55:14.872265641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77c8d6dd-bgt6p,Uid:d3b3d048-dd23-43c6-a58f-53a36c816495,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:55:14.974451 kubelet[2292]: I0813 00:55:14.974405 2292 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-whisker-ca-bundle\") pod \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\" (UID: \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\") " Aug 13 00:55:14.974860 kubelet[2292]: I0813 00:55:14.974487 2292 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-whisker-backend-key-pair\") pod \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\" (UID: \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\") " Aug 13 00:55:14.974860 kubelet[2292]: I0813 00:55:14.974515 2292 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m24vz\" (UniqueName: \"kubernetes.io/projected/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-kube-api-access-m24vz\") pod \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\" (UID: \"913dce23-f3a0-4e7d-9b8e-f702cec1a3d6\") " Aug 13 00:55:15.000664 kubelet[2292]: I0813 00:55:14.995233 2292 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-kube-api-access-m24vz" (OuterVolumeSpecName: "kube-api-access-m24vz") pod "913dce23-f3a0-4e7d-9b8e-f702cec1a3d6" (UID: "913dce23-f3a0-4e7d-9b8e-f702cec1a3d6"). InnerVolumeSpecName "kube-api-access-m24vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:55:15.000853 kubelet[2292]: I0813 00:55:15.000826 2292 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "913dce23-f3a0-4e7d-9b8e-f702cec1a3d6" (UID: "913dce23-f3a0-4e7d-9b8e-f702cec1a3d6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:55:15.000898 kubelet[2292]: I0813 00:55:14.995394 2292 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "913dce23-f3a0-4e7d-9b8e-f702cec1a3d6" (UID: "913dce23-f3a0-4e7d-9b8e-f702cec1a3d6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:55:15.026557 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:55:15.026640 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif254b4d2622: link becomes ready Aug 13 00:55:15.030262 systemd-networkd[1119]: calif254b4d2622: Link UP Aug 13 00:55:15.030389 systemd-networkd[1119]: calif254b4d2622: Gained carrier Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.935 [INFO][3598] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.942 [INFO][3598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0 calico-apiserver-6d77c8d6dd- calico-apiserver d3b3d048-dd23-43c6-a58f-53a36c816495 897 0 2025-08-13 00:54:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d77c8d6dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d77c8d6dd-bgt6p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif254b4d2622 [] [] }} ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-bgt6p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-" Aug 13 
00:55:15.044468 env[1359]: 2025-08-13 00:55:14.943 [INFO][3598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-bgt6p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.967 [INFO][3611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" HandleID="k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.967 [INFO][3611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" HandleID="k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d77c8d6dd-bgt6p", "timestamp":"2025-08-13 00:55:14.967681576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.968 [INFO][3611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.968 [INFO][3611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.968 [INFO][3611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.976 [INFO][3611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" host="localhost" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.988 [INFO][3611] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.995 [INFO][3611] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.996 [INFO][3611] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.999 [INFO][3611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:14.999 [INFO][3611] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" host="localhost" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:15.001 [INFO][3611] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9 Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:15.003 [INFO][3611] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" host="localhost" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:15.007 [INFO][3611] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" host="localhost" Aug 13 
00:55:15.044468 env[1359]: 2025-08-13 00:55:15.007 [INFO][3611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" host="localhost" Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:15.008 [INFO][3611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:15.044468 env[1359]: 2025-08-13 00:55:15.008 [INFO][3611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" HandleID="k8s-pod-network.cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:15.046700 env[1359]: 2025-08-13 00:55:15.009 [INFO][3598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-bgt6p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0", GenerateName:"calico-apiserver-6d77c8d6dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3b3d048-dd23-43c6-a58f-53a36c816495", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d77c8d6dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d77c8d6dd-bgt6p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif254b4d2622", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:15.046700 env[1359]: 2025-08-13 00:55:15.009 [INFO][3598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-bgt6p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:15.046700 env[1359]: 2025-08-13 00:55:15.009 [INFO][3598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif254b4d2622 ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-bgt6p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:15.046700 env[1359]: 2025-08-13 00:55:15.026 [INFO][3598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-bgt6p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:15.046700 env[1359]: 2025-08-13 00:55:15.026 [INFO][3598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d77c8d6dd-bgt6p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0", GenerateName:"calico-apiserver-6d77c8d6dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3b3d048-dd23-43c6-a58f-53a36c816495", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d77c8d6dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9", Pod:"calico-apiserver-6d77c8d6dd-bgt6p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif254b4d2622", MAC:"26:ae:a5:89:03:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:15.046700 env[1359]: 2025-08-13 00:55:15.042 [INFO][3598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-bgt6p" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--bgt6p-eth0" Aug 13 00:55:15.054044 env[1359]: time="2025-08-13T00:55:15.053993410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:15.054205 env[1359]: time="2025-08-13T00:55:15.054028133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:15.054205 env[1359]: time="2025-08-13T00:55:15.054035802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:15.054337 env[1359]: time="2025-08-13T00:55:15.054321048Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9 pid=3642 runtime=io.containerd.runc.v2 Aug 13 00:55:15.075832 kubelet[2292]: I0813 00:55:15.074958 2292 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:15.075832 kubelet[2292]: I0813 00:55:15.074983 2292 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:15.075832 kubelet[2292]: I0813 00:55:15.074989 2292 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m24vz\" (UniqueName: \"kubernetes.io/projected/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6-kube-api-access-m24vz\") on node \"localhost\" DevicePath \"\"" Aug 13 00:55:15.122112 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:15.141489 
systemd-networkd[1119]: califd2ff4ab659: Link UP Aug 13 00:55:15.143171 systemd-networkd[1119]: califd2ff4ab659: Gained carrier Aug 13 00:55:15.143577 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califd2ff4ab659: link becomes ready Aug 13 00:55:15.152000 audit[3712]: AVC avc: denied { write } for pid=3712 comm="tee" name="fd" dev="proc" ino=38121 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.187500 kernel: audit: type=1400 audit(1755046515.152:299): avc: denied { write } for pid=3712 comm="tee" name="fd" dev="proc" ino=38121 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.197342 kernel: audit: type=1400 audit(1755046515.155:300): avc: denied { write } for pid=3715 comm="tee" name="fd" dev="proc" ino=38124 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.197402 kernel: audit: type=1300 audit(1755046515.152:299): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd218267ed a2=241 a3=1b6 items=1 ppid=3666 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.197429 kernel: audit: type=1307 audit(1755046515.152:299): cwd="/etc/service/enabled/confd/log" Aug 13 00:55:15.206100 kernel: audit: type=1400 audit(1755046515.175:301): avc: denied { write } for pid=3717 comm="tee" name="fd" dev="proc" ino=38150 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.213371 kernel: audit: type=1302 audit(1755046515.152:299): item=0 name="/dev/fd/63" inode=38088 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:55:15.213411 kernel: audit: type=1327 
audit(1755046515.152:299): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:55:15.213432 kernel: audit: type=1300 audit(1755046515.155:300): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca850a7ed a2=241 a3=1b6 items=1 ppid=3661 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.213449 kernel: audit: type=1307 audit(1755046515.155:300): cwd="/etc/service/enabled/felix/log" Aug 13 00:55:15.213465 kernel: audit: type=1302 audit(1755046515.155:300): item=0 name="/dev/fd/63" inode=38089 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:55:15.155000 audit[3715]: AVC avc: denied { write } for pid=3715 comm="tee" name="fd" dev="proc" ino=38124 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.152000 audit[3712]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd218267ed a2=241 a3=1b6 items=1 ppid=3666 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.152000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 13 00:55:15.175000 audit[3717]: AVC avc: denied { write } for pid=3717 comm="tee" name="fd" dev="proc" ino=38150 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.152000 audit: PATH item=0 name="/dev/fd/63" inode=38088 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:55:15.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:55:15.155000 audit[3715]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca850a7ed a2=241 a3=1b6 items=1 ppid=3661 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.155000 audit: CWD cwd="/etc/service/enabled/felix/log" Aug 13 00:55:15.155000 audit: PATH item=0 name="/dev/fd/63" inode=38089 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:14.913 [INFO][3587] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:14.921 [INFO][3587] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0 calico-apiserver-7cbf5687db- calico-apiserver 3a86fd1c-50b7-4722-86dc-208e2b22565d 896 0 2025-08-13 00:54:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cbf5687db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cbf5687db-l5vzj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califd2ff4ab659 [] [] }} ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-l5vzj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 
00:55:14.921 [INFO][3587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-l5vzj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.001 [INFO][3616] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" HandleID="k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.001 [INFO][3616] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" HandleID="k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c90c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cbf5687db-l5vzj", "timestamp":"2025-08-13 00:55:15.00153973 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.001 [INFO][3616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.008 [INFO][3616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.009 [INFO][3616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.075 [INFO][3616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" host="localhost" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.088 [INFO][3616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.095 [INFO][3616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.099 [INFO][3616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.100 [INFO][3616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.100 [INFO][3616] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" host="localhost" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.102 [INFO][3616] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6 Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.117 [INFO][3616] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" host="localhost" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.132 [INFO][3616] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" host="localhost" Aug 13 
00:55:15.213850 env[1359]: 2025-08-13 00:55:15.132 [INFO][3616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" host="localhost" Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.132 [INFO][3616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:15.213850 env[1359]: 2025-08-13 00:55:15.132 [INFO][3616] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" HandleID="k8s-pod-network.e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:15.214281 env[1359]: 2025-08-13 00:55:15.133 [INFO][3587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-l5vzj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0", GenerateName:"calico-apiserver-7cbf5687db-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a86fd1c-50b7-4722-86dc-208e2b22565d", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbf5687db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cbf5687db-l5vzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd2ff4ab659", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:15.214281 env[1359]: 2025-08-13 00:55:15.133 [INFO][3587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-l5vzj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:15.214281 env[1359]: 2025-08-13 00:55:15.133 [INFO][3587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd2ff4ab659 ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-l5vzj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:15.214281 env[1359]: 2025-08-13 00:55:15.143 [INFO][3587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-l5vzj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:15.214281 env[1359]: 2025-08-13 00:55:15.144 [INFO][3587] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Namespace="calico-apiserver" 
Pod="calico-apiserver-7cbf5687db-l5vzj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0", GenerateName:"calico-apiserver-7cbf5687db-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a86fd1c-50b7-4722-86dc-208e2b22565d", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbf5687db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6", Pod:"calico-apiserver-7cbf5687db-l5vzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd2ff4ab659", MAC:"32:1e:58:da:5c:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:15.214281 env[1359]: 2025-08-13 00:55:15.161 [INFO][3587] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-l5vzj" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:55:15.214281 env[1359]: time="2025-08-13T00:55:15.188701927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77c8d6dd-bgt6p,Uid:d3b3d048-dd23-43c6-a58f-53a36c816495,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9\"" Aug 13 00:55:15.214281 env[1359]: time="2025-08-13T00:55:15.190110369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:55:15.214281 env[1359]: time="2025-08-13T00:55:15.198472492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:15.214281 env[1359]: time="2025-08-13T00:55:15.198519849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:15.214281 env[1359]: time="2025-08-13T00:55:15.198541686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:15.214781 env[1359]: time="2025-08-13T00:55:15.198679457Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6 pid=3763 runtime=io.containerd.runc.v2 Aug 13 00:55:15.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:55:15.175000 audit[3717]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffdc93e7ef a2=241 a3=1b6 items=1 ppid=3665 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.175000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 13 00:55:15.175000 audit: PATH item=0 name="/dev/fd/63" inode=38090 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:55:15.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:55:15.241407 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:15.240000 audit[3740]: AVC avc: denied { write } for pid=3740 comm="tee" name="fd" dev="proc" ino=37473 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.240000 audit[3740]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffecd7107dd a2=241 a3=1b6 items=1 ppid=3663 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.240000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 00:55:15.240000 audit: PATH item=0 name="/dev/fd/63" inode=38125 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:55:15.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:55:15.249000 audit[3759]: AVC avc: denied { write } for pid=3759 comm="tee" name="fd" dev="proc" ino=37477 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.249000 audit[3759]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5e9647ed a2=241 a3=1b6 items=1 ppid=3679 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.249000 audit: CWD cwd="/etc/service/enabled/bird6/log" Aug 13 00:55:15.249000 audit: PATH item=0 name="/dev/fd/63" inode=38157 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:55:15.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:55:15.250000 audit[3790]: AVC avc: denied { write } for pid=3790 comm="tee" name="fd" dev="proc" ino=37481 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.250000 audit[3790]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcca0207de a2=241 a3=1b6 items=1 ppid=3686 pid=3790 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.250000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 13 00:55:15.250000 audit: PATH item=0 name="/dev/fd/63" inode=38187 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:55:15.250000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:55:15.251000 audit[3775]: AVC avc: denied { write } for pid=3775 comm="tee" name="fd" dev="proc" ino=37485 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:55:15.251000 audit[3775]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc70c437ee a2=241 a3=1b6 items=1 ppid=3680 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.251000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 13 00:55:15.251000 audit: PATH item=0 name="/dev/fd/63" inode=38173 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:55:15.251000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:55:15.316470 env[1359]: time="2025-08-13T00:55:15.316433943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbf5687db-l5vzj,Uid:3a86fd1c-50b7-4722-86dc-208e2b22565d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id 
\"e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6\"" Aug 13 00:55:15.540406 env[1359]: time="2025-08-13T00:55:15.540377862Z" level=info msg="StopPodSandbox for \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\"" Aug 13 00:55:15.541017 env[1359]: time="2025-08-13T00:55:15.540651982Z" level=info msg="StopPodSandbox for \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\"" Aug 13 00:55:15.541017 env[1359]: time="2025-08-13T00:55:15.540856316Z" level=info msg="StopPodSandbox for \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\"" Aug 13 00:55:15.546609 kubelet[2292]: I0813 00:55:15.546535 2292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913dce23-f3a0-4e7d-9b8e-f702cec1a3d6" path="/var/lib/kubelet/pods/913dce23-f3a0-4e7d-9b8e-f702cec1a3d6/volumes" Aug 13 00:55:15.584512 kubelet[2292]: I0813 00:55:15.584490 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bed0cdb-9639-4fbd-ab32-91d92304f2cb-whisker-ca-bundle\") pod \"whisker-8559899697-pc8q9\" (UID: \"6bed0cdb-9639-4fbd-ab32-91d92304f2cb\") " pod="calico-system/whisker-8559899697-pc8q9" Aug 13 00:55:15.584712 kubelet[2292]: I0813 00:55:15.584698 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph99j\" (UniqueName: \"kubernetes.io/projected/6bed0cdb-9639-4fbd-ab32-91d92304f2cb-kube-api-access-ph99j\") pod \"whisker-8559899697-pc8q9\" (UID: \"6bed0cdb-9639-4fbd-ab32-91d92304f2cb\") " pod="calico-system/whisker-8559899697-pc8q9" Aug 13 00:55:15.585699 kubelet[2292]: I0813 00:55:15.584980 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bed0cdb-9639-4fbd-ab32-91d92304f2cb-whisker-backend-key-pair\") pod \"whisker-8559899697-pc8q9\" (UID: 
\"6bed0cdb-9639-4fbd-ab32-91d92304f2cb\") " pod="calico-system/whisker-8559899697-pc8q9" Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:55:15.584000 audit: BPF prog-id=10 op=LOAD Aug 13 00:55:15.584000 audit[3873]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcca5da070 a2=98 a3=1fffffffffffffff items=0 ppid=3662 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.584000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:55:15.584000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit: BPF prog-id=11 op=LOAD Aug 13 00:55:15.588000 audit[3873]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcca5d9f50 a2=94 a3=3 items=0 ppid=3662 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.588000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:55:15.588000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit[3873]: AVC avc: denied { bpf } for pid=3873 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.588000 audit: BPF prog-id=12 op=LOAD Aug 13 00:55:15.588000 audit[3873]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcca5d9f90 a2=94 a3=7ffcca5da170 items=0 ppid=3662 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.588000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:55:15.592000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:55:15.592000 audit[3873]: AVC avc: denied { perfmon } for pid=3873 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.592000 audit[3873]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffcca5da060 a2=50 a3=a000000085 items=0 ppid=3662 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.592000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.602000 audit: BPF prog-id=13 op=LOAD Aug 13 00:55:15.602000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9d815d00 a2=98 a3=3 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.602000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.602000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit: BPF prog-id=14 op=LOAD Aug 13 00:55:15.603000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9d815af0 a2=94 a3=54428f items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.603000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.603000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.603000 audit: BPF prog-id=15 op=LOAD Aug 13 00:55:15.603000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9d815b20 a2=94 a3=2 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.603000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.603000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.637 [INFO][3857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.637 [INFO][3857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" iface="eth0" netns="/var/run/netns/cni-7b5ea00f-906b-a0b4-b383-cc20c936245a" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.637 [INFO][3857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" iface="eth0" netns="/var/run/netns/cni-7b5ea00f-906b-a0b4-b383-cc20c936245a" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.637 [INFO][3857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" iface="eth0" netns="/var/run/netns/cni-7b5ea00f-906b-a0b4-b383-cc20c936245a" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.637 [INFO][3857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.637 [INFO][3857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.721 [INFO][3886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" HandleID="k8s-pod-network.34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Workload="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.722 [INFO][3886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.722 [INFO][3886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.732 [WARNING][3886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" HandleID="k8s-pod-network.34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Workload="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.732 [INFO][3886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" HandleID="k8s-pod-network.34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Workload="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.733 [INFO][3886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:15.736475 env[1359]: 2025-08-13 00:55:15.734 [INFO][3857] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433" Aug 13 00:55:15.737997 env[1359]: time="2025-08-13T00:55:15.736604203Z" level=info msg="TearDown network for sandbox \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\" successfully" Aug 13 00:55:15.737997 env[1359]: time="2025-08-13T00:55:15.736623720Z" level=info msg="StopPodSandbox for \"34bbcc9c6281d901a229f5c06a6be283fb294454e5d37725f08310387956b433\" returns successfully" Aug 13 00:55:15.737997 env[1359]: time="2025-08-13T00:55:15.737229168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bcc59697d-9mb8b,Uid:21361a1e-b6fe-46a2-b608-047b2089e93b,Namespace:calico-system,Attempt:1,}" Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit: BPF prog-id=16 op=LOAD Aug 13 00:55:15.740000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9d8159e0 a2=94 a3=1 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.740000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.740000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:55:15.740000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.740000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe9d815ab0 a2=50 a3=7ffe9d815b90 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.740000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9d8159f0 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9d815a20 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9d815930 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9d815a40 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9d815a20 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 
00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9d815a10 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9d815a40 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9d815a20 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9d815a40 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9d815a10 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9d815a80 a2=28 a3=0 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe9d815830 a2=50 a3=1 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit: BPF prog-id=17 op=LOAD Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe9d815830 a2=94 a3=5 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe9d8158e0 a2=50 a3=1 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe9d815a00 a2=4 a3=38 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 
audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.752000 audit[3882]: AVC avc: denied { confidentiality } for pid=3882 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:55:15.752000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe9d815a50 a2=94 a3=6 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { 
perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { confidentiality } for pid=3882 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:55:15.753000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe9d815200 a2=94 a3=88 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { bpf } for pid=3882 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { perfmon } for pid=3882 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { bpf } for pid=3882 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:15.753000 audit[3882]: AVC avc: denied { confidentiality } for pid=3882 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:55:15.753000 audit[3882]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe9d815200 a2=94 a3=88 items=0 ppid=3662 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:15.753000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.667 [INFO][3858] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.667 [INFO][3858] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" iface="eth0" netns="/var/run/netns/cni-a8317b8c-ca41-1f17-54f9-d030de801140" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.667 [INFO][3858] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" iface="eth0" netns="/var/run/netns/cni-a8317b8c-ca41-1f17-54f9-d030de801140" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.667 [INFO][3858] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" iface="eth0" netns="/var/run/netns/cni-a8317b8c-ca41-1f17-54f9-d030de801140" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.667 [INFO][3858] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.667 [INFO][3858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.738 [INFO][3894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" HandleID="k8s-pod-network.05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Workload="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.738 [INFO][3894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.738 [INFO][3894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.742 [WARNING][3894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" HandleID="k8s-pod-network.05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Workload="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.742 [INFO][3894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" HandleID="k8s-pod-network.05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Workload="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.743 [INFO][3894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:15.760752 env[1359]: 2025-08-13 00:55:15.757 [INFO][3858] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae" Aug 13 00:55:15.760752 env[1359]: time="2025-08-13T00:55:15.760038978Z" level=info msg="TearDown network for sandbox \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\" successfully" Aug 13 00:55:15.760752 env[1359]: time="2025-08-13T00:55:15.760065407Z" level=info msg="StopPodSandbox for \"05f3e61951a931958e00300dffd61ff22559a43c2fa085aa5e93e89564f2abae\" returns successfully" Aug 13 00:55:15.760752 env[1359]: time="2025-08-13T00:55:15.760531184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kl8br,Uid:0c33fae7-fe42-4436-aa98-d3a4e41c85e1,Namespace:calico-system,Attempt:1,}" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.655 [INFO][3856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.655 [INFO][3856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" iface="eth0" netns="/var/run/netns/cni-8db7850a-5576-038a-2299-9a23747956f9" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.656 [INFO][3856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" iface="eth0" netns="/var/run/netns/cni-8db7850a-5576-038a-2299-9a23747956f9" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.656 [INFO][3856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" iface="eth0" netns="/var/run/netns/cni-8db7850a-5576-038a-2299-9a23747956f9" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.656 [INFO][3856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.656 [INFO][3856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.743 [INFO][3892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" HandleID="k8s-pod-network.b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Workload="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.744 [INFO][3892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.744 [INFO][3892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.748 [WARNING][3892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" HandleID="k8s-pod-network.b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Workload="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.748 [INFO][3892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" HandleID="k8s-pod-network.b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Workload="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.749 [INFO][3892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:15.762451 env[1359]: 2025-08-13 00:55:15.759 [INFO][3856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051" Aug 13 00:55:15.769161 env[1359]: time="2025-08-13T00:55:15.762588803Z" level=info msg="TearDown network for sandbox \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\" successfully" Aug 13 00:55:15.769161 env[1359]: time="2025-08-13T00:55:15.762605364Z" level=info msg="StopPodSandbox for \"b5ce7847b72ed451e46b87362ce38af8168de7a1dd22b661880f11dd32e49051\" returns successfully" Aug 13 00:55:15.769161 env[1359]: time="2025-08-13T00:55:15.762960725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qx2bj,Uid:416a1a84-00d1-44b4-bc44-a5aa913a36f7,Namespace:kube-system,Attempt:1,}" Aug 13 00:55:15.804167 env[1359]: time="2025-08-13T00:55:15.804091647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8559899697-pc8q9,Uid:6bed0cdb-9639-4fbd-ab32-91d92304f2cb,Namespace:calico-system,Attempt:0,}" Aug 13 00:55:15.849063 systemd[1]: run-netns-cni\x2d7b5ea00f\x2d906b\x2da0b4\x2db383\x2dcc20c936245a.mount: Deactivated successfully. 
Aug 13 00:55:15.849146 systemd[1]: run-netns-cni\x2da8317b8c\x2dca41\x2d1f17\x2d54f9\x2dd030de801140.mount: Deactivated successfully. Aug 13 00:55:15.849196 systemd[1]: run-netns-cni\x2d8db7850a\x2d5576\x2d038a\x2d2299\x2d9a23747956f9.mount: Deactivated successfully. Aug 13 00:55:15.849248 systemd[1]: var-lib-kubelet-pods-913dce23\x2df3a0\x2d4e7d\x2d9b8e\x2df702cec1a3d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm24vz.mount: Deactivated successfully. Aug 13 00:55:15.849300 systemd[1]: var-lib-kubelet-pods-913dce23\x2df3a0\x2d4e7d\x2d9b8e\x2df702cec1a3d6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:55:16.006000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit[3957]: 
AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.006000 audit: BPF prog-id=18 op=LOAD Aug 13 00:55:16.006000 audit[3957]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe885d24d0 a2=98 a3=1999999999999999 items=0 ppid=3662 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.006000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:55:16.011000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.011000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.011000 audit: BPF prog-id=19 op=LOAD Aug 13 00:55:16.011000 audit[3957]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe885d23b0 a2=94 a3=ffff items=0 ppid=3662 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.011000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 
13 00:55:16.011000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { perfmon } for pid=3957 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.012000 audit[3957]: AVC avc: denied { bpf } for pid=3957 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:55:16.012000 audit: BPF prog-id=20 op=LOAD Aug 13 00:55:16.012000 audit[3957]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe885d23f0 a2=94 a3=7ffe885d25d0 items=0 ppid=3662 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.012000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:55:16.012000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:55:16.037030 systemd-networkd[1119]: cali999013cbadf: Link UP Aug 13 00:55:16.043518 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:55:16.043614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali999013cbadf: link becomes ready Aug 13 00:55:16.044540 systemd-networkd[1119]: cali999013cbadf: Gained carrier Aug 13 00:55:16.045370 kubelet[2292]: I0813 00:55:16.045291 2292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.901 [INFO][3913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0 calico-kube-controllers-7bcc59697d- calico-system 21361a1e-b6fe-46a2-b608-047b2089e93b 925 0 2025-08-13 00:54:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bcc59697d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bcc59697d-9mb8b eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali999013cbadf [] [] }} ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Namespace="calico-system" Pod="calico-kube-controllers-7bcc59697d-9mb8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.901 [INFO][3913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Namespace="calico-system" Pod="calico-kube-controllers-7bcc59697d-9mb8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.969 [INFO][3924] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" HandleID="k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Workload="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.969 [INFO][3924] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" HandleID="k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Workload="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000250ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bcc59697d-9mb8b", "timestamp":"2025-08-13 00:55:15.969270549 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.969 [INFO][3924] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.969 [INFO][3924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.969 [INFO][3924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.979 [INFO][3924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.988 [INFO][3924] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.995 [INFO][3924] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:15.997 [INFO][3924] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:16.000 [INFO][3924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:16.000 [INFO][3924] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:16.004 [INFO][3924] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9 Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:16.010 [INFO][3924] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:16.020 [INFO][3924] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:16.020 [INFO][3924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" host="localhost" Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:16.020 [INFO][3924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:16.089948 env[1359]: 2025-08-13 00:55:16.020 [INFO][3924] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" HandleID="k8s-pod-network.48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Workload="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:16.092471 env[1359]: 2025-08-13 00:55:16.024 [INFO][3913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Namespace="calico-system" Pod="calico-kube-controllers-7bcc59697d-9mb8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0", GenerateName:"calico-kube-controllers-7bcc59697d-", Namespace:"calico-system", SelfLink:"", UID:"21361a1e-b6fe-46a2-b608-047b2089e93b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bcc59697d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bcc59697d-9mb8b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali999013cbadf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:16.092471 env[1359]: 2025-08-13 00:55:16.024 [INFO][3913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Namespace="calico-system" Pod="calico-kube-controllers-7bcc59697d-9mb8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:16.092471 env[1359]: 2025-08-13 00:55:16.024 [INFO][3913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali999013cbadf ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Namespace="calico-system" Pod="calico-kube-controllers-7bcc59697d-9mb8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:16.092471 env[1359]: 2025-08-13 00:55:16.039 [INFO][3913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Namespace="calico-system" Pod="calico-kube-controllers-7bcc59697d-9mb8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:16.092471 env[1359]: 2025-08-13 00:55:16.046 [INFO][3913] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Namespace="calico-system" Pod="calico-kube-controllers-7bcc59697d-9mb8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0", GenerateName:"calico-kube-controllers-7bcc59697d-", Namespace:"calico-system", SelfLink:"", UID:"21361a1e-b6fe-46a2-b608-047b2089e93b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bcc59697d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9", Pod:"calico-kube-controllers-7bcc59697d-9mb8b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali999013cbadf", MAC:"e2:49:86:50:98:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:16.092471 env[1359]: 2025-08-13 00:55:16.074 [INFO][3913] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9" Namespace="calico-system" Pod="calico-kube-controllers-7bcc59697d-9mb8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bcc59697d--9mb8b-eth0" Aug 13 00:55:16.137883 systemd-networkd[1119]: vxlan.calico: Link UP Aug 13 00:55:16.137888 systemd-networkd[1119]: vxlan.calico: Gained carrier Aug 13 00:55:16.208456 env[1359]: time="2025-08-13T00:55:16.205131359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:16.208456 env[1359]: time="2025-08-13T00:55:16.205164813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:16.208456 env[1359]: time="2025-08-13T00:55:16.205174034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:16.208456 env[1359]: time="2025-08-13T00:55:16.205274584Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9 pid=4030 runtime=io.containerd.runc.v2 Aug 13 00:55:16.208000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 
audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.208000 audit: BPF prog-id=21 op=LOAD Aug 13 00:55:16.208000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff2223aca0 a2=98 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.208000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.208000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:55:16.210000 audit[4041]: 
AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit: BPF prog-id=22 op=LOAD Aug 13 00:55:16.210000 audit[4041]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff2223aab0 a2=94 a3=54428f items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit: BPF prog-id=23 op=LOAD Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff2223aae0 a2=94 a3=2 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff2223a9b0 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff2223a9e0 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff2223a8f0 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff2223aa00 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff2223a9e0 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff2223a9d0 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff2223aa00 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff2223a9e0 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff2223aa00 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff2223a9d0 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff2223aa40 a2=28 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit: BPF prog-id=24 op=LOAD Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff2223a8b0 a2=94 a3=0 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7fff2223a8a0 a2=50 a3=2800 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7fff2223a8a0 a2=50 a3=2800 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit: BPF prog-id=25 op=LOAD Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff2223a0c0 a2=94 a3=2 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.210000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { perfmon } for pid=4041 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit[4041]: AVC avc: denied { bpf } for pid=4041 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.210000 audit: BPF prog-id=26 op=LOAD Aug 13 00:55:16.210000 audit[4041]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff2223a1c0 a2=94 a3=30 items=0 ppid=3662 pid=4041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Aug 13 00:55:16.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 
audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit: BPF prog-id=27 op=LOAD Aug 13 00:55:16.213000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd06378860 a2=98 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.213000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.213000 audit: BPF prog-id=27 op=UNLOAD Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for 
pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.213000 audit: BPF prog-id=28 op=LOAD Aug 13 00:55:16.213000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd06378650 a2=94 a3=54428f items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.213000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.214000 audit: BPF prog-id=28 op=UNLOAD Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.214000 audit: BPF prog-id=29 op=LOAD Aug 13 00:55:16.214000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd06378680 a2=94 a3=2 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.214000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.214000 audit: BPF prog-id=29 op=UNLOAD Aug 13 00:55:16.232942 systemd-networkd[1119]: cali3a432d14999: Link UP Aug 13 00:55:16.237261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3a432d14999: link becomes ready Aug 13 00:55:16.236690 systemd-networkd[1119]: cali3a432d14999: Gained carrier Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.003 [INFO][3929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0 coredns-7c65d6cfc9- kube-system 416a1a84-00d1-44b4-bc44-a5aa913a36f7 927 0 2025-08-13 00:54:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qx2bj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a432d14999 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qx2bj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qx2bj-" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.003 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qx2bj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.147 [INFO][3959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" 
HandleID="k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Workload="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.147 [INFO][3959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" HandleID="k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Workload="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qx2bj", "timestamp":"2025-08-13 00:55:16.147017458 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.147 [INFO][3959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.147 [INFO][3959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.147 [INFO][3959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.157 [INFO][3959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" host="localhost" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.169 [INFO][3959] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.173 [INFO][3959] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.178 [INFO][3959] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.180 [INFO][3959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.180 [INFO][3959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" host="localhost" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.183 [INFO][3959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.186 [INFO][3959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" host="localhost" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.193 [INFO][3959] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" host="localhost" Aug 13 
00:55:16.254685 env[1359]: 2025-08-13 00:55:16.193 [INFO][3959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" host="localhost" Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.193 [INFO][3959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:16.254685 env[1359]: 2025-08-13 00:55:16.193 [INFO][3959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" HandleID="k8s-pod-network.830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Workload="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:16.255297 env[1359]: 2025-08-13 00:55:16.207 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qx2bj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"416a1a84-00d1-44b4-bc44-a5aa913a36f7", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-qx2bj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a432d14999", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:16.255297 env[1359]: 2025-08-13 00:55:16.207 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qx2bj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:16.255297 env[1359]: 2025-08-13 00:55:16.207 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a432d14999 ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qx2bj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:16.255297 env[1359]: 2025-08-13 00:55:16.239 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qx2bj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:16.255297 env[1359]: 2025-08-13 00:55:16.239 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qx2bj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"416a1a84-00d1-44b4-bc44-a5aa913a36f7", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b", Pod:"coredns-7c65d6cfc9-qx2bj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a432d14999", MAC:"b6:02:c0:b5:9c:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:16.255297 env[1359]: 2025-08-13 00:55:16.249 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qx2bj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qx2bj-eth0" Aug 13 00:55:16.330713 systemd-networkd[1119]: califd2ff4ab659: Gained IPv6LL Aug 13 00:55:16.363000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.363000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.363000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.363000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.363000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.363000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.363000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:55:16.363000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.363000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.363000 audit: BPF prog-id=30 op=LOAD Aug 13 00:55:16.363000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd06378540 a2=94 a3=1 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.363000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.364000 audit: BPF prog-id=30 op=UNLOAD Aug 13 00:55:16.364000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.364000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd06378610 a2=50 a3=7ffd063786f0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.364000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd06378550 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd06378580 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd06378490 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd063785a0 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd06378580 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd06378570 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd063785a0 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd06378580 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd063785a0 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd06378570 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd063785e0 a2=28 a3=0 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd06378390 a2=50 a3=1 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for 
pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit: BPF prog-id=31 op=LOAD Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd06378390 a2=94 a3=5 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit: BPF prog-id=31 op=UNLOAD Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd06378440 a2=50 a3=1 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd06378560 a2=4 a3=38 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.399000 audit[4046]: AVC avc: denied { confidentiality } for pid=4046 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:55:16.399000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd063785b0 a2=94 a3=6 items=0 
ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { perfmon } for 
pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.400000 audit[4046]: AVC avc: denied { confidentiality } for pid=4046 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:55:16.400000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd06377d60 a2=94 a3=88 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.400000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { perfmon } for pid=4046 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { confidentiality } for pid=4046 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:55:16.405000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd06377d60 a2=94 a3=88 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.405000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd06379790 a2=10 a3=f8f00800 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.405000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd06379630 a2=10 a3=3 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.405000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd063795d0 a2=10 a3=3 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.405000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.405000 audit[4046]: AVC avc: denied { bpf } for pid=4046 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:55:16.405000 audit[4046]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd063795d0 a2=10 a3=7 items=0 ppid=3662 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.405000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:55:16.414000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:55:16.437406 env[1359]: time="2025-08-13T00:55:16.437311679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:16.437602 env[1359]: time="2025-08-13T00:55:16.437416087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:16.437602 env[1359]: time="2025-08-13T00:55:16.437441726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:16.437766 env[1359]: time="2025-08-13T00:55:16.437637805Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b pid=4102 runtime=io.containerd.runc.v2 Aug 13 00:55:16.462360 systemd-networkd[1119]: calidfb8091eec7: Link UP Aug 13 00:55:16.471288 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidfb8091eec7: link becomes ready Aug 13 00:55:16.470712 systemd-networkd[1119]: calidfb8091eec7: Gained carrier Aug 13 00:55:16.516393 env[1359]: time="2025-08-13T00:55:16.516343538Z" level=info msg="StopPodSandbox for \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\"" Aug 13 00:55:16.518064 env[1359]: time="2025-08-13T00:55:16.516494086Z" level=info msg="StopPodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\"" Aug 13 00:55:16.522900 env[1359]: time="2025-08-13T00:55:16.517221429Z" level=info msg="StopPodSandbox for \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\"" Aug 13 00:55:16.522693 systemd-networkd[1119]: calif254b4d2622: Gained IPv6LL Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.114 [INFO][3940] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--kl8br-eth0 goldmane-58fd7646b9- calico-system 0c33fae7-fe42-4436-aa98-d3a4e41c85e1 928 0 2025-08-13 00:54:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-kl8br eth0 
goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidfb8091eec7 [] [] }} ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl8br" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--kl8br-" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.115 [INFO][3940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl8br" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.214 [INFO][3989] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" HandleID="k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Workload="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.214 [INFO][3989] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" HandleID="k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Workload="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-kl8br", "timestamp":"2025-08-13 00:55:16.21406225 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.214 [INFO][3989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.214 [INFO][3989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.214 [INFO][3989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.257 [INFO][3989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.263 [INFO][3989] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.276 [INFO][3989] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.278 [INFO][3989] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.284 [INFO][3989] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.284 [INFO][3989] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.286 [INFO][3989] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.294 [INFO][3989] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.326 [INFO][3989] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.326 [INFO][3989] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" host="localhost" Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.327 [INFO][3989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:16.539846 env[1359]: 2025-08-13 00:55:16.327 [INFO][3989] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" HandleID="k8s-pod-network.adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Workload="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:16.540548 env[1359]: 2025-08-13 00:55:16.352 [INFO][3940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl8br" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--kl8br-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0c33fae7-fe42-4436-aa98-d3a4e41c85e1", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-kl8br", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidfb8091eec7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:16.540548 env[1359]: 2025-08-13 00:55:16.352 [INFO][3940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl8br" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:16.540548 env[1359]: 2025-08-13 00:55:16.352 [INFO][3940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfb8091eec7 ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl8br" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:16.540548 env[1359]: 2025-08-13 00:55:16.472 [INFO][3940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl8br" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:16.540548 env[1359]: 2025-08-13 00:55:16.482 [INFO][3940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl8br" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--kl8br-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0c33fae7-fe42-4436-aa98-d3a4e41c85e1", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f", Pod:"goldmane-58fd7646b9-kl8br", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidfb8091eec7", MAC:"0a:78:dd:b5:8c:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:16.540548 env[1359]: 2025-08-13 00:55:16.501 [INFO][3940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f" Namespace="calico-system" Pod="goldmane-58fd7646b9-kl8br" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--kl8br-eth0" Aug 13 00:55:16.553663 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No 
such device or address Aug 13 00:55:16.565303 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:16.657000 audit[4216]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=4216 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:16.657000 audit[4216]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd04ea6200 a2=0 a3=7ffd04ea61ec items=0 ppid=3662 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.657000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:16.672000 audit[4220]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=4220 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:16.672000 audit[4220]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd670faca0 a2=0 a3=7ffd670fac8c items=0 ppid=3662 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.672000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:16.676321 env[1359]: time="2025-08-13T00:55:16.676288795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qx2bj,Uid:416a1a84-00d1-44b4-bc44-a5aa913a36f7,Namespace:kube-system,Attempt:1,} returns sandbox id \"830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b\"" Aug 13 00:55:16.686036 env[1359]: 
time="2025-08-13T00:55:16.686012121Z" level=info msg="CreateContainer within sandbox \"830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:55:16.688000 audit[4215]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=4215 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:16.688000 audit[4215]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff0045de20 a2=0 a3=7fff0045de0c items=0 ppid=3662 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.688000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:16.699931 systemd-networkd[1119]: cali0089c654530: Link UP Aug 13 00:55:16.703087 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0089c654530: link becomes ready Aug 13 00:55:16.702671 systemd-networkd[1119]: cali0089c654530: Gained carrier Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.157 [INFO][3962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8559899697--pc8q9-eth0 whisker-8559899697- calico-system 6bed0cdb-9639-4fbd-ab32-91d92304f2cb 923 0 2025-08-13 00:55:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8559899697 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8559899697-pc8q9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0089c654530 [] [] }} ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Namespace="calico-system" 
Pod="whisker-8559899697-pc8q9" WorkloadEndpoint="localhost-k8s-whisker--8559899697--pc8q9-" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.157 [INFO][3962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Namespace="calico-system" Pod="whisker-8559899697-pc8q9" WorkloadEndpoint="localhost-k8s-whisker--8559899697--pc8q9-eth0" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.495 [INFO][4024] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" HandleID="k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Workload="localhost-k8s-whisker--8559899697--pc8q9-eth0" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.495 [INFO][4024] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" HandleID="k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Workload="localhost-k8s-whisker--8559899697--pc8q9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5f40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8559899697-pc8q9", "timestamp":"2025-08-13 00:55:16.495471221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.495 [INFO][4024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.495 [INFO][4024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.495 [INFO][4024] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.547 [INFO][4024] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" host="localhost" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.592 [INFO][4024] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.614 [INFO][4024] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.621 [INFO][4024] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.631 [INFO][4024] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.631 [INFO][4024] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" host="localhost" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.643 [INFO][4024] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4 Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.655 [INFO][4024] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" host="localhost" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.672 [INFO][4024] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" host="localhost" Aug 13 
00:55:16.729727 env[1359]: 2025-08-13 00:55:16.672 [INFO][4024] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" host="localhost" Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.672 [INFO][4024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:16.729727 env[1359]: 2025-08-13 00:55:16.672 [INFO][4024] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" HandleID="k8s-pod-network.db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Workload="localhost-k8s-whisker--8559899697--pc8q9-eth0" Aug 13 00:55:16.730249 env[1359]: 2025-08-13 00:55:16.692 [INFO][3962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Namespace="calico-system" Pod="whisker-8559899697-pc8q9" WorkloadEndpoint="localhost-k8s-whisker--8559899697--pc8q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8559899697--pc8q9-eth0", GenerateName:"whisker-8559899697-", Namespace:"calico-system", SelfLink:"", UID:"6bed0cdb-9639-4fbd-ab32-91d92304f2cb", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8559899697", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8559899697-pc8q9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0089c654530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:16.730249 env[1359]: 2025-08-13 00:55:16.693 [INFO][3962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Namespace="calico-system" Pod="whisker-8559899697-pc8q9" WorkloadEndpoint="localhost-k8s-whisker--8559899697--pc8q9-eth0" Aug 13 00:55:16.730249 env[1359]: 2025-08-13 00:55:16.693 [INFO][3962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0089c654530 ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Namespace="calico-system" Pod="whisker-8559899697-pc8q9" WorkloadEndpoint="localhost-k8s-whisker--8559899697--pc8q9-eth0" Aug 13 00:55:16.730249 env[1359]: 2025-08-13 00:55:16.703 [INFO][3962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Namespace="calico-system" Pod="whisker-8559899697-pc8q9" WorkloadEndpoint="localhost-k8s-whisker--8559899697--pc8q9-eth0" Aug 13 00:55:16.730249 env[1359]: 2025-08-13 00:55:16.703 [INFO][3962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Namespace="calico-system" Pod="whisker-8559899697-pc8q9" WorkloadEndpoint="localhost-k8s-whisker--8559899697--pc8q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8559899697--pc8q9-eth0", GenerateName:"whisker-8559899697-", Namespace:"calico-system", SelfLink:"", UID:"6bed0cdb-9639-4fbd-ab32-91d92304f2cb", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8559899697", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4", Pod:"whisker-8559899697-pc8q9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0089c654530", MAC:"62:79:4c:b3:16:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:16.730249 env[1359]: 2025-08-13 00:55:16.725 [INFO][3962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4" Namespace="calico-system" Pod="whisker-8559899697-pc8q9" WorkloadEndpoint="localhost-k8s-whisker--8559899697--pc8q9-eth0" Aug 13 00:55:16.716000 audit[4226]: NETFILTER_CFG table=filter:104 family=2 entries=150 op=nft_register_chain pid=4226 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:16.716000 audit[4226]: SYSCALL arch=c000003e syscall=46 success=yes exit=85768 a0=3 a1=7ffe93f10a60 a2=0 a3=7ffe93f10a4c 
items=0 ppid=3662 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.716000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:16.787702 env[1359]: time="2025-08-13T00:55:16.785290821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:16.787702 env[1359]: time="2025-08-13T00:55:16.785324121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:16.787702 env[1359]: time="2025-08-13T00:55:16.785331743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:16.787702 env[1359]: time="2025-08-13T00:55:16.785443453Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f pid=4250 runtime=io.containerd.runc.v2 Aug 13 00:55:16.793236 env[1359]: time="2025-08-13T00:55:16.792883415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bcc59697d-9mb8b,Uid:21361a1e-b6fe-46a2-b608-047b2089e93b,Namespace:calico-system,Attempt:1,} returns sandbox id \"48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9\"" Aug 13 00:55:16.799000 audit[4280]: NETFILTER_CFG table=filter:105 family=2 entries=131 op=nft_register_chain pid=4280 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:16.799000 audit[4280]: SYSCALL arch=c000003e syscall=46 success=yes exit=77176 a0=3 a1=7ffe81a39a40 a2=0 a3=7ffe81a39a2c items=0 ppid=3662 pid=4280 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:16.799000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:16.814702 env[1359]: time="2025-08-13T00:55:16.814663848Z" level=info msg="CreateContainer within sandbox \"830071546512a207fb0925d461e43015dc496fbc753107de0900bfebc770744b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d8b7b11453355e02d1971b177edfe29b84a0e0c2696f5bc6352d44e614fbc2a\"" Aug 13 00:55:16.815330 env[1359]: time="2025-08-13T00:55:16.815315699Z" level=info msg="StartContainer for \"5d8b7b11453355e02d1971b177edfe29b84a0e0c2696f5bc6352d44e614fbc2a\"" Aug 13 00:55:16.832423 env[1359]: time="2025-08-13T00:55:16.832374548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:16.833518 env[1359]: time="2025-08-13T00:55:16.832427907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:16.833518 env[1359]: time="2025-08-13T00:55:16.832444404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:16.833518 env[1359]: time="2025-08-13T00:55:16.832547673Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4 pid=4304 runtime=io.containerd.runc.v2 Aug 13 00:55:16.866617 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:17.001617 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:17.003540 env[1359]: time="2025-08-13T00:55:17.003509357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-kl8br,Uid:0c33fae7-fe42-4436-aa98-d3a4e41c85e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f\"" Aug 13 00:55:17.122218 env[1359]: time="2025-08-13T00:55:17.122149266Z" level=info msg="StartContainer for \"5d8b7b11453355e02d1971b177edfe29b84a0e0c2696f5bc6352d44e614fbc2a\" returns successfully" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:16.979 [INFO][4246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:16.987 [INFO][4246] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" iface="eth0" netns="/var/run/netns/cni-e0958249-4b50-034b-f22c-c4ef25b61225" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:16.987 [INFO][4246] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" iface="eth0" netns="/var/run/netns/cni-e0958249-4b50-034b-f22c-c4ef25b61225" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:16.989 [INFO][4246] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" iface="eth0" netns="/var/run/netns/cni-e0958249-4b50-034b-f22c-c4ef25b61225" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:16.989 [INFO][4246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:16.989 [INFO][4246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:17.111 [INFO][4375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:17.111 [INFO][4375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:17.111 [INFO][4375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:17.119 [WARNING][4375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:17.120 [INFO][4375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:17.122 [INFO][4375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:17.127448 env[1359]: 2025-08-13 00:55:17.125 [INFO][4246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:17.131464 env[1359]: time="2025-08-13T00:55:17.127679439Z" level=info msg="TearDown network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\" successfully" Aug 13 00:55:17.131464 env[1359]: time="2025-08-13T00:55:17.127708988Z" level=info msg="StopPodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\" returns successfully" Aug 13 00:55:17.131464 env[1359]: time="2025-08-13T00:55:17.129943485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbf5687db-jtpwn,Uid:5ec0c637-128e-47db-8784-76084717fd4b,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.036 [INFO][4258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.038 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" iface="eth0" netns="/var/run/netns/cni-1a90f2fd-c712-7fec-3306-565815e2027b" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.038 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" iface="eth0" netns="/var/run/netns/cni-1a90f2fd-c712-7fec-3306-565815e2027b" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.038 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" iface="eth0" netns="/var/run/netns/cni-1a90f2fd-c712-7fec-3306-565815e2027b" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.038 [INFO][4258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.038 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.121 [INFO][4387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.121 [INFO][4387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.124 [INFO][4387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.132 [WARNING][4387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.132 [INFO][4387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.133 [INFO][4387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:17.137579 env[1359]: 2025-08-13 00:55:17.135 [INFO][4258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:55:17.153087 env[1359]: time="2025-08-13T00:55:17.137696673Z" level=info msg="TearDown network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\" successfully" Aug 13 00:55:17.153087 env[1359]: time="2025-08-13T00:55:17.137717192Z" level=info msg="StopPodSandbox for \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\" returns successfully" Aug 13 00:55:17.153087 env[1359]: time="2025-08-13T00:55:17.138081595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8559899697-pc8q9,Uid:6bed0cdb-9639-4fbd-ab32-91d92304f2cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4\"" Aug 13 00:55:17.153087 env[1359]: time="2025-08-13T00:55:17.138346821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2fnk,Uid:f7dd644a-3303-4570-9359-66c16da8794d,Namespace:calico-system,Attempt:1,}" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:16.913 [INFO][4224] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:16.942 [INFO][4224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" iface="eth0" netns="/var/run/netns/cni-0e36b57c-1c70-409f-7614-f6950fe332d2" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:16.942 [INFO][4224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" iface="eth0" netns="/var/run/netns/cni-0e36b57c-1c70-409f-7614-f6950fe332d2" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:16.942 [INFO][4224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" iface="eth0" netns="/var/run/netns/cni-0e36b57c-1c70-409f-7614-f6950fe332d2" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:16.942 [INFO][4224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:16.942 [INFO][4224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:17.144 [INFO][4366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:17.144 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:17.144 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:17.157 [WARNING][4366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:17.157 [INFO][4366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:17.160 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:17.179136 env[1359]: 2025-08-13 00:55:17.162 [INFO][4224] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:55:17.179136 env[1359]: time="2025-08-13T00:55:17.164139842Z" level=info msg="TearDown network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\" successfully" Aug 13 00:55:17.179136 env[1359]: time="2025-08-13T00:55:17.164169481Z" level=info msg="StopPodSandbox for \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\" returns successfully" Aug 13 00:55:17.179136 env[1359]: time="2025-08-13T00:55:17.164554343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p64qm,Uid:4e691412-289d-4857-95a4-f28ceeef2595,Namespace:kube-system,Attempt:1,}" Aug 13 00:55:17.459017 systemd-networkd[1119]: cali30dadbcd7c0: Link UP Aug 13 00:55:17.461211 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:55:17.462167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali30dadbcd7c0: link becomes ready Aug 13 00:55:17.461723 systemd-networkd[1119]: cali30dadbcd7c0: Gained carrier Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.326 [INFO][4413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0 calico-apiserver-7cbf5687db- calico-apiserver 5ec0c637-128e-47db-8784-76084717fd4b 952 0 2025-08-13 00:54:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cbf5687db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cbf5687db-jtpwn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30dadbcd7c0 [] [] }} ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-jtpwn" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.326 [INFO][4413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-jtpwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.383 [INFO][4439] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" HandleID="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.383 [INFO][4439] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" HandleID="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003254a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cbf5687db-jtpwn", "timestamp":"2025-08-13 00:55:17.383690579 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.383 [INFO][4439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.383 [INFO][4439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.383 [INFO][4439] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.396 [INFO][4439] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" host="localhost" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.401 [INFO][4439] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.404 [INFO][4439] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.406 [INFO][4439] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.411 [INFO][4439] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.411 [INFO][4439] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" host="localhost" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.413 [INFO][4439] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.419 [INFO][4439] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" host="localhost" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.428 [INFO][4439] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" host="localhost" Aug 13 
00:55:17.482635 env[1359]: 2025-08-13 00:55:17.428 [INFO][4439] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" host="localhost" Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.428 [INFO][4439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:17.482635 env[1359]: 2025-08-13 00:55:17.428 [INFO][4439] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" HandleID="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.486177 env[1359]: 2025-08-13 00:55:17.430 [INFO][4413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-jtpwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0", GenerateName:"calico-apiserver-7cbf5687db-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ec0c637-128e-47db-8784-76084717fd4b", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbf5687db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cbf5687db-jtpwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30dadbcd7c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:17.486177 env[1359]: 2025-08-13 00:55:17.431 [INFO][4413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-jtpwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.486177 env[1359]: 2025-08-13 00:55:17.431 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30dadbcd7c0 ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-jtpwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.486177 env[1359]: 2025-08-13 00:55:17.461 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-jtpwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.486177 env[1359]: 2025-08-13 00:55:17.463 [INFO][4413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Namespace="calico-apiserver" 
Pod="calico-apiserver-7cbf5687db-jtpwn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0", GenerateName:"calico-apiserver-7cbf5687db-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ec0c637-128e-47db-8784-76084717fd4b", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbf5687db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf", Pod:"calico-apiserver-7cbf5687db-jtpwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30dadbcd7c0", MAC:"aa:57:8a:36:e0:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:17.486177 env[1359]: 2025-08-13 00:55:17.472 [INFO][4413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Namespace="calico-apiserver" Pod="calico-apiserver-7cbf5687db-jtpwn" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:17.483246 systemd-networkd[1119]: cali3a432d14999: Gained IPv6LL Aug 13 00:55:17.531000 audit[4488]: NETFILTER_CFG table=filter:106 family=2 entries=59 op=nft_register_chain pid=4488 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:17.531000 audit[4488]: SYSCALL arch=c000003e syscall=46 success=yes exit=29476 a0=3 a1=7ffd92eb6970 a2=0 a3=7ffd92eb695c items=0 ppid=3662 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:17.531000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:17.560788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib575ca47f0d: link becomes ready Aug 13 00:55:17.561061 systemd-networkd[1119]: calib575ca47f0d: Link UP Aug 13 00:55:17.561159 systemd-networkd[1119]: calib575ca47f0d: Gained carrier Aug 13 00:55:17.569357 env[1359]: time="2025-08-13T00:55:17.569298869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:17.569511 env[1359]: time="2025-08-13T00:55:17.569342486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:17.569511 env[1359]: time="2025-08-13T00:55:17.569353819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:17.569632 env[1359]: time="2025-08-13T00:55:17.569507858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf pid=4491 runtime=io.containerd.runc.v2 Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.396 [INFO][4418] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--q2fnk-eth0 csi-node-driver- calico-system f7dd644a-3303-4570-9359-66c16da8794d 954 0 2025-08-13 00:54:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-q2fnk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib575ca47f0d [] [] }} ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Namespace="calico-system" Pod="csi-node-driver-q2fnk" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2fnk-" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.397 [INFO][4418] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Namespace="calico-system" Pod="csi-node-driver-q2fnk" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.460 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" HandleID="k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 
00:55:17.594837 env[1359]: 2025-08-13 00:55:17.461 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" HandleID="k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002deff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-q2fnk", "timestamp":"2025-08-13 00:55:17.460830788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.471 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.472 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.472 [INFO][4460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.490 [INFO][4460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" host="localhost" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.501 [INFO][4460] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.506 [INFO][4460] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.510 [INFO][4460] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.520 [INFO][4460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.520 [INFO][4460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" host="localhost" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.522 [INFO][4460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96 Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.534 [INFO][4460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" host="localhost" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.543 [INFO][4460] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" host="localhost" Aug 13 
00:55:17.594837 env[1359]: 2025-08-13 00:55:17.544 [INFO][4460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" host="localhost" Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.544 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:17.594837 env[1359]: 2025-08-13 00:55:17.544 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" HandleID="k8s-pod-network.2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.595772 env[1359]: 2025-08-13 00:55:17.549 [INFO][4418] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Namespace="calico-system" Pod="csi-node-driver-q2fnk" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2fnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q2fnk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7dd644a-3303-4570-9359-66c16da8794d", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-q2fnk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib575ca47f0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:17.595772 env[1359]: 2025-08-13 00:55:17.549 [INFO][4418] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Namespace="calico-system" Pod="csi-node-driver-q2fnk" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.595772 env[1359]: 2025-08-13 00:55:17.549 [INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib575ca47f0d ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Namespace="calico-system" Pod="csi-node-driver-q2fnk" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.595772 env[1359]: 2025-08-13 00:55:17.560 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Namespace="calico-system" Pod="csi-node-driver-q2fnk" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.595772 env[1359]: 2025-08-13 00:55:17.565 [INFO][4418] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Namespace="calico-system" Pod="csi-node-driver-q2fnk" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2fnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q2fnk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7dd644a-3303-4570-9359-66c16da8794d", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96", Pod:"csi-node-driver-q2fnk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib575ca47f0d", MAC:"92:8a:2c:e7:c7:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:17.595772 env[1359]: 2025-08-13 00:55:17.589 [INFO][4418] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96" Namespace="calico-system" Pod="csi-node-driver-q2fnk" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:55:17.614000 audit[4521]: NETFILTER_CFG table=filter:107 family=2 entries=52 op=nft_register_chain pid=4521 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 
00:55:17.614000 audit[4521]: SYSCALL arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7fffb0bfea80 a2=0 a3=7fffb0bfea6c items=0 ppid=3662 pid=4521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:17.614000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:17.635653 env[1359]: time="2025-08-13T00:55:17.635337326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:17.635653 env[1359]: time="2025-08-13T00:55:17.635411081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:17.635653 env[1359]: time="2025-08-13T00:55:17.635429817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:17.635989 env[1359]: time="2025-08-13T00:55:17.635950604Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96 pid=4526 runtime=io.containerd.runc.v2 Aug 13 00:55:17.648933 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:17.676118 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:17.685314 env[1359]: time="2025-08-13T00:55:17.685279318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbf5687db-jtpwn,Uid:5ec0c637-128e-47db-8784-76084717fd4b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf\"" Aug 13 00:55:17.685407 env[1359]: time="2025-08-13T00:55:17.685217336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2fnk,Uid:f7dd644a-3303-4570-9359-66c16da8794d,Namespace:calico-system,Attempt:1,} returns sandbox id \"2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96\"" Aug 13 00:55:17.697693 systemd-networkd[1119]: caliaa60dbc83a3: Link UP Aug 13 00:55:17.699834 systemd-networkd[1119]: caliaa60dbc83a3: Gained carrier Aug 13 00:55:17.717006 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaa60dbc83a3: link becomes ready Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.419 [INFO][4427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0 coredns-7c65d6cfc9- kube-system 4e691412-289d-4857-95a4-f28ceeef2595 951 0 2025-08-13 00:54:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-p64qm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa60dbc83a3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p64qm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p64qm-" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.419 [INFO][4427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p64qm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.540 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" HandleID="k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.540 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" HandleID="k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-p64qm", "timestamp":"2025-08-13 00:55:17.540780769 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.541 
[INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.546 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.546 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.593 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.603 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.617 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.627 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.634 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.634 [INFO][4466] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.638 [INFO][4466] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407 Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.662 [INFO][4466] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.692 [INFO][4466] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.692 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" host="localhost" Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.692 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:17.746028 env[1359]: 2025-08-13 00:55:17.692 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" HandleID="k8s-pod-network.37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.752738 env[1359]: 2025-08-13 00:55:17.694 [INFO][4427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p64qm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e691412-289d-4857-95a4-f28ceeef2595", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-p64qm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa60dbc83a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:17.752738 env[1359]: 2025-08-13 00:55:17.694 [INFO][4427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p64qm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.752738 env[1359]: 2025-08-13 00:55:17.694 [INFO][4427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa60dbc83a3 ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p64qm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.752738 env[1359]: 2025-08-13 00:55:17.700 [INFO][4427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p64qm" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.752738 env[1359]: 2025-08-13 00:55:17.700 [INFO][4427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p64qm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e691412-289d-4857-95a4-f28ceeef2595", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407", Pod:"coredns-7c65d6cfc9-p64qm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa60dbc83a3", MAC:"b2:49:d5:1c:6a:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:17.752738 env[1359]: 2025-08-13 00:55:17.742 [INFO][4427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p64qm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:55:17.800736 systemd-networkd[1119]: cali999013cbadf: Gained IPv6LL Aug 13 00:55:17.850249 systemd[1]: run-containerd-runc-k8s.io-5d8b7b11453355e02d1971b177edfe29b84a0e0c2696f5bc6352d44e614fbc2a-runc.7AlrcT.mount: Deactivated successfully. Aug 13 00:55:17.850401 systemd[1]: run-netns-cni\x2d0e36b57c\x2d1c70\x2d409f\x2d7614\x2df6950fe332d2.mount: Deactivated successfully. Aug 13 00:55:17.850494 systemd[1]: run-netns-cni\x2de0958249\x2d4b50\x2d034b\x2df22c\x2dc4ef25b61225.mount: Deactivated successfully. Aug 13 00:55:17.850584 systemd[1]: run-netns-cni\x2d1a90f2fd\x2dc712\x2d7fec\x2d3306\x2d565815e2027b.mount: Deactivated successfully. 
Aug 13 00:55:17.868409 systemd-networkd[1119]: calidfb8091eec7: Gained IPv6LL Aug 13 00:55:17.868916 systemd-networkd[1119]: vxlan.calico: Gained IPv6LL Aug 13 00:55:17.894000 audit[4582]: NETFILTER_CFG table=filter:108 family=2 entries=52 op=nft_register_chain pid=4582 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:17.894000 audit[4582]: SYSCALL arch=c000003e syscall=46 success=yes exit=23876 a0=3 a1=7ffe2ecc7ab0 a2=0 a3=7ffe2ecc7a9c items=0 ppid=3662 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:17.894000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:17.917814 env[1359]: time="2025-08-13T00:55:17.917762658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:17.917962 env[1359]: time="2025-08-13T00:55:17.917795492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:17.917962 env[1359]: time="2025-08-13T00:55:17.917803195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:17.918155 env[1359]: time="2025-08-13T00:55:17.918121755Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407 pid=4590 runtime=io.containerd.runc.v2 Aug 13 00:55:17.972606 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:18.012176 env[1359]: time="2025-08-13T00:55:18.012142066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p64qm,Uid:4e691412-289d-4857-95a4-f28ceeef2595,Namespace:kube-system,Attempt:1,} returns sandbox id \"37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407\"" Aug 13 00:55:18.018786 env[1359]: time="2025-08-13T00:55:18.018756267Z" level=info msg="CreateContainer within sandbox \"37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:55:18.053285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617398889.mount: Deactivated successfully. 
Aug 13 00:55:18.083912 env[1359]: time="2025-08-13T00:55:18.083880579Z" level=info msg="CreateContainer within sandbox \"37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f9bda8770e801155f68162da5d0a1f06a0956ed01694aa16347009ef165134f\"" Aug 13 00:55:18.440725 systemd-networkd[1119]: cali0089c654530: Gained IPv6LL Aug 13 00:55:18.824753 systemd-networkd[1119]: cali30dadbcd7c0: Gained IPv6LL Aug 13 00:55:18.873539 env[1359]: time="2025-08-13T00:55:18.873499112Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:18.879738 env[1359]: time="2025-08-13T00:55:18.879709090Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:18.882202 env[1359]: time="2025-08-13T00:55:18.882163439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:18.883585 env[1359]: time="2025-08-13T00:55:18.883540302Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:18.884785 env[1359]: time="2025-08-13T00:55:18.884751599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:55:18.954022 systemd-networkd[1119]: calib575ca47f0d: Gained IPv6LL Aug 13 00:55:19.327028 env[1359]: time="2025-08-13T00:55:19.326825495Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:55:19.342151 env[1359]: time="2025-08-13T00:55:19.342109775Z" level=info msg="StartContainer for \"9f9bda8770e801155f68162da5d0a1f06a0956ed01694aa16347009ef165134f\"" Aug 13 00:55:19.362606 env[1359]: time="2025-08-13T00:55:19.361621325Z" level=info msg="CreateContainer within sandbox \"cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:55:19.378403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494674512.mount: Deactivated successfully. Aug 13 00:55:19.399844 env[1359]: time="2025-08-13T00:55:19.399805470Z" level=info msg="CreateContainer within sandbox \"cd655e642c539a4efbfda5150e7a761942c3fb2b80d82b393917d57107c453d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6166750ff6bc4ec51e6405f6e2c0ae49808abfdb5462c23279ece92291fe7a36\"" Aug 13 00:55:19.411647 env[1359]: time="2025-08-13T00:55:19.410441335Z" level=info msg="StartContainer for \"6166750ff6bc4ec51e6405f6e2c0ae49808abfdb5462c23279ece92291fe7a36\"" Aug 13 00:55:19.417361 kubelet[2292]: I0813 00:55:19.415036 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qx2bj" podStartSLOduration=44.409775821 podStartE2EDuration="44.409775821s" podCreationTimestamp="2025-08-13 00:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:19.378970781 +0000 UTC m=+48.035648453" watchObservedRunningTime="2025-08-13 00:55:19.409775821 +0000 UTC m=+48.066453485" Aug 13 00:55:19.446509 env[1359]: time="2025-08-13T00:55:19.436673916Z" level=info msg="StartContainer for \"9f9bda8770e801155f68162da5d0a1f06a0956ed01694aa16347009ef165134f\" returns successfully" Aug 13 00:55:19.482692 env[1359]: time="2025-08-13T00:55:19.482652491Z" level=info msg="StartContainer for 
\"6166750ff6bc4ec51e6405f6e2c0ae49808abfdb5462c23279ece92291fe7a36\" returns successfully" Aug 13 00:55:19.587000 audit[4700]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=4700 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:19.587000 audit[4700]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc966a57f0 a2=0 a3=7ffc966a57dc items=0 ppid=2438 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:19.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:19.593730 systemd-networkd[1119]: caliaa60dbc83a3: Gained IPv6LL Aug 13 00:55:19.594000 audit[4700]: NETFILTER_CFG table=nat:110 family=2 entries=35 op=nft_register_chain pid=4700 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:19.594000 audit[4700]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc966a57f0 a2=0 a3=7ffc966a57dc items=0 ppid=2438 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:19.594000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:19.673000 audit[4703]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:19.673000 audit[4703]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd59ec500 a2=0 a3=7fffd59ec4ec items=0 ppid=2438 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:19.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:19.678000 audit[4703]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:19.678000 audit[4703]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd59ec500 a2=0 a3=7fffd59ec4ec items=0 ppid=2438 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:19.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:19.732006 env[1359]: time="2025-08-13T00:55:19.731970189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:19.732991 env[1359]: time="2025-08-13T00:55:19.732973458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:19.733910 env[1359]: time="2025-08-13T00:55:19.733896181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:19.734599 env[1359]: time="2025-08-13T00:55:19.734572579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:55:19.735290 env[1359]: time="2025-08-13T00:55:19.735270495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:55:19.736097 env[1359]: time="2025-08-13T00:55:19.736080314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:55:19.737054 env[1359]: time="2025-08-13T00:55:19.737008327Z" level=info msg="CreateContainer within sandbox \"e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:55:19.754795 env[1359]: time="2025-08-13T00:55:19.754760360Z" level=info msg="CreateContainer within sandbox \"e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"400eb0d5d54cf6031d4369c52744caec0f49555ab2b73bf3e3cc735d2a2b496e\"" Aug 13 00:55:19.755234 env[1359]: time="2025-08-13T00:55:19.755209916Z" level=info msg="StartContainer for \"400eb0d5d54cf6031d4369c52744caec0f49555ab2b73bf3e3cc735d2a2b496e\"" Aug 13 00:55:19.866276 env[1359]: time="2025-08-13T00:55:19.865290110Z" level=info msg="StartContainer for \"400eb0d5d54cf6031d4369c52744caec0f49555ab2b73bf3e3cc735d2a2b496e\" returns successfully" Aug 13 00:55:20.387927 kubelet[2292]: I0813 00:55:20.387884 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p64qm" podStartSLOduration=45.387871309 podStartE2EDuration="45.387871309s" podCreationTimestamp="2025-08-13 00:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:20.38640538 +0000 UTC m=+49.043083052" watchObservedRunningTime="2025-08-13 00:55:20.387871309 +0000 UTC m=+49.044548972" Aug 13 00:55:20.418515 kubelet[2292]: 
I0813 00:55:20.418479 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cbf5687db-l5vzj" podStartSLOduration=32.028693166 podStartE2EDuration="36.418464386s" podCreationTimestamp="2025-08-13 00:54:44 +0000 UTC" firstStartedPulling="2025-08-13 00:55:15.345960685 +0000 UTC m=+44.002638343" lastFinishedPulling="2025-08-13 00:55:19.7357319 +0000 UTC m=+48.392409563" observedRunningTime="2025-08-13 00:55:20.407500606 +0000 UTC m=+49.064178274" watchObservedRunningTime="2025-08-13 00:55:20.418464386 +0000 UTC m=+49.075142049" Aug 13 00:55:20.700595 kernel: kauditd_printk_skb: 571 callbacks suppressed Aug 13 00:55:20.702097 kernel: audit: type=1325 audit(1755046520.697:416): table=filter:113 family=2 entries=14 op=nft_register_rule pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:20.703673 kernel: audit: type=1300 audit(1755046520.697:416): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc30966830 a2=0 a3=7ffc3096681c items=0 ppid=2438 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:20.697000 audit[4750]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:20.697000 audit[4750]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc30966830 a2=0 a3=7ffc3096681c items=0 ppid=2438 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:20.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:20.709584 kernel: audit: type=1327 
audit(1755046520.697:416): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:20.725000 audit[4750]: NETFILTER_CFG table=nat:114 family=2 entries=56 op=nft_register_chain pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:20.725000 audit[4750]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc30966830 a2=0 a3=7ffc3096681c items=0 ppid=2438 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:20.734179 kernel: audit: type=1325 audit(1755046520.725:417): table=nat:114 family=2 entries=56 op=nft_register_chain pid=4750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:20.734281 kernel: audit: type=1300 audit(1755046520.725:417): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc30966830 a2=0 a3=7ffc3096681c items=0 ppid=2438 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:20.734301 kernel: audit: type=1327 audit(1755046520.725:417): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:20.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:21.414166 kubelet[2292]: I0813 00:55:21.414128 2292 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:55:21.621842 kubelet[2292]: I0813 00:55:21.621584 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d77c8d6dd-bgt6p" podStartSLOduration=32.496618067 
podStartE2EDuration="36.621570548s" podCreationTimestamp="2025-08-13 00:54:45 +0000 UTC" firstStartedPulling="2025-08-13 00:55:15.189708366 +0000 UTC m=+43.846386023" lastFinishedPulling="2025-08-13 00:55:19.314660836 +0000 UTC m=+47.971338504" observedRunningTime="2025-08-13 00:55:20.419224034 +0000 UTC m=+49.075901699" watchObservedRunningTime="2025-08-13 00:55:21.621570548 +0000 UTC m=+50.278248212" Aug 13 00:55:21.686814 kernel: audit: type=1325 audit(1755046521.678:418): table=filter:115 family=2 entries=13 op=nft_register_rule pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:21.686927 kernel: audit: type=1300 audit(1755046521.678:418): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc1e40cb30 a2=0 a3=7ffc1e40cb1c items=0 ppid=2438 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:21.678000 audit[4755]: NETFILTER_CFG table=filter:115 family=2 entries=13 op=nft_register_rule pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:21.690065 kernel: audit: type=1327 audit(1755046521.678:418): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:21.692414 kernel: audit: type=1325 audit(1755046521.687:419): table=nat:116 family=2 entries=27 op=nft_register_chain pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:21.678000 audit[4755]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc1e40cb30 a2=0 a3=7ffc1e40cb1c items=0 ppid=2438 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:21.678000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:21.687000 audit[4755]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:21.687000 audit[4755]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc1e40cb30 a2=0 a3=7ffc1e40cb1c items=0 ppid=2438 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:21.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:23.431467 env[1359]: time="2025-08-13T00:55:23.431438584Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:23.443908 env[1359]: time="2025-08-13T00:55:23.436150766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:23.443908 env[1359]: time="2025-08-13T00:55:23.438473069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:23.443908 env[1359]: time="2025-08-13T00:55:23.442128238Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:23.443908 env[1359]: time="2025-08-13T00:55:23.442688655Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:55:23.454816 env[1359]: time="2025-08-13T00:55:23.454790979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:55:23.874593 env[1359]: time="2025-08-13T00:55:23.874524708Z" level=info msg="CreateContainer within sandbox \"48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:55:23.978744 env[1359]: time="2025-08-13T00:55:23.978696558Z" level=info msg="CreateContainer within sandbox \"48a6639e441187333d36da5879cb2616fc918cd96dc4fe365070abe34f9cd9b9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cdc5b193cd998694992868b062a1b14fec6d3176601aab77cf6fadcaefce7376\"" Aug 13 00:55:23.982177 env[1359]: time="2025-08-13T00:55:23.981159954Z" level=info msg="StartContainer for \"cdc5b193cd998694992868b062a1b14fec6d3176601aab77cf6fadcaefce7376\"" Aug 13 00:55:24.079081 env[1359]: time="2025-08-13T00:55:24.079037418Z" level=info msg="StartContainer for \"cdc5b193cd998694992868b062a1b14fec6d3176601aab77cf6fadcaefce7376\" returns successfully" Aug 13 00:55:24.752895 kubelet[2292]: I0813 00:55:24.733753 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bcc59697d-9mb8b" podStartSLOduration=31.060635736 podStartE2EDuration="37.706538681s" podCreationTimestamp="2025-08-13 00:54:47 +0000 UTC" firstStartedPulling="2025-08-13 00:55:16.801413233 +0000 UTC m=+45.458090894" lastFinishedPulling="2025-08-13 00:55:23.447316174 +0000 UTC m=+52.103993839" observedRunningTime="2025-08-13 00:55:24.684715842 +0000 UTC m=+53.341393511" watchObservedRunningTime="2025-08-13 00:55:24.706538681 +0000 UTC m=+53.363216345" Aug 13 00:55:25.484831 kubelet[2292]: I0813 00:55:25.484766 2292 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:55:25.738320 systemd[1]: run-containerd-runc-k8s.io-cdc5b193cd998694992868b062a1b14fec6d3176601aab77cf6fadcaefce7376-runc.x56RXW.mount: Deactivated successfully. Aug 13 00:55:26.857556 systemd[1]: run-containerd-runc-k8s.io-cdc5b193cd998694992868b062a1b14fec6d3176601aab77cf6fadcaefce7376-runc.rs68Ge.mount: Deactivated successfully. Aug 13 00:55:27.785441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567977093.mount: Deactivated successfully. Aug 13 00:55:29.705000 audit[4877]: NETFILTER_CFG table=filter:117 family=2 entries=12 op=nft_register_rule pid=4877 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:29.888210 kernel: kauditd_printk_skb: 2 callbacks suppressed Aug 13 00:55:29.971829 kernel: audit: type=1325 audit(1755046529.705:420): table=filter:117 family=2 entries=12 op=nft_register_rule pid=4877 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:29.998473 kernel: audit: type=1300 audit(1755046529.705:420): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc65eb4c50 a2=0 a3=7ffc65eb4c3c items=0 ppid=2438 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:29.998549 kernel: audit: type=1327 audit(1755046529.705:420): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:29.998597 kernel: audit: type=1325 audit(1755046529.716:421): table=nat:118 family=2 entries=34 op=nft_register_chain pid=4877 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:29.998631 kernel: audit: type=1300 audit(1755046529.716:421): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffc65eb4c50 a2=0 a3=7ffc65eb4c3c items=0 ppid=2438 pid=4877 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:30.010473 kernel: audit: type=1327 audit(1755046529.716:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:29.705000 audit[4877]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc65eb4c50 a2=0 a3=7ffc65eb4c3c items=0 ppid=2438 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:29.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:29.716000 audit[4877]: NETFILTER_CFG table=nat:118 family=2 entries=34 op=nft_register_chain pid=4877 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:29.716000 audit[4877]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffc65eb4c50 a2=0 a3=7ffc65eb4c3c items=0 ppid=2438 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:29.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:30.157556 env[1359]: time="2025-08-13T00:55:29.895875658Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:30.157556 env[1359]: time="2025-08-13T00:55:29.927879113Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:30.157556 env[1359]: time="2025-08-13T00:55:29.958301305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:30.157556 env[1359]: time="2025-08-13T00:55:29.976994560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:55:30.157556 env[1359]: time="2025-08-13T00:55:29.976430145Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:30.741679 env[1359]: time="2025-08-13T00:55:30.741484253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:55:31.010416 kubelet[2292]: I0813 00:55:30.923919 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c511538-4ed9-4440-87c2-83635efecdfa-calico-apiserver-certs\") pod \"calico-apiserver-6d77c8d6dd-fkw2g\" (UID: \"1c511538-4ed9-4440-87c2-83635efecdfa\") " pod="calico-apiserver/calico-apiserver-6d77c8d6dd-fkw2g" Aug 13 00:55:31.120180 kubelet[2292]: I0813 00:55:31.120146 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9q2\" (UniqueName: \"kubernetes.io/projected/1c511538-4ed9-4440-87c2-83635efecdfa-kube-api-access-pd9q2\") pod \"calico-apiserver-6d77c8d6dd-fkw2g\" (UID: \"1c511538-4ed9-4440-87c2-83635efecdfa\") " pod="calico-apiserver/calico-apiserver-6d77c8d6dd-fkw2g" Aug 13 
00:55:31.121090 env[1359]: time="2025-08-13T00:55:31.121059854Z" level=info msg="CreateContainer within sandbox \"adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:55:31.193549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644716733.mount: Deactivated successfully. Aug 13 00:55:31.233502 env[1359]: time="2025-08-13T00:55:31.208613024Z" level=info msg="CreateContainer within sandbox \"adfad51626d819da98acbabfbe2e2761aaeb052753b26249953c4d8f2177453f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4\"" Aug 13 00:55:31.290199 env[1359]: time="2025-08-13T00:55:31.289632141Z" level=info msg="StartContainer for \"3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4\"" Aug 13 00:55:31.464369 env[1359]: time="2025-08-13T00:55:31.464338694Z" level=info msg="StartContainer for \"3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4\" returns successfully" Aug 13 00:55:32.184434 systemd[1]: run-containerd-runc-k8s.io-cdc5b193cd998694992868b062a1b14fec6d3176601aab77cf6fadcaefce7376-runc.pyikrv.mount: Deactivated successfully. 
Aug 13 00:55:32.243984 env[1359]: time="2025-08-13T00:55:32.243940990Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:32.264222 env[1359]: time="2025-08-13T00:55:32.264180818Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:32.270042 env[1359]: time="2025-08-13T00:55:32.270010358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:32.289002 env[1359]: time="2025-08-13T00:55:32.288977761Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:32.290012 env[1359]: time="2025-08-13T00:55:32.289993385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:55:32.693109 env[1359]: time="2025-08-13T00:55:32.692930087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:55:32.803388 env[1359]: time="2025-08-13T00:55:32.803163215Z" level=info msg="CreateContainer within sandbox \"db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:55:32.828285 env[1359]: time="2025-08-13T00:55:32.828263887Z" level=info msg="StopPodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\"" Aug 13 00:55:32.834936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602582399.mount: Deactivated 
successfully. Aug 13 00:55:32.849103 env[1359]: time="2025-08-13T00:55:32.849077190Z" level=info msg="CreateContainer within sandbox \"db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e09bb705168040e0272738c3c0d60044e2e115e5acb9e90c1feb5fbca0802f73\"" Aug 13 00:55:32.851947 env[1359]: time="2025-08-13T00:55:32.851907675Z" level=info msg="StartContainer for \"e09bb705168040e0272738c3c0d60044e2e115e5acb9e90c1feb5fbca0802f73\"" Aug 13 00:55:32.934257 env[1359]: time="2025-08-13T00:55:32.934227417Z" level=info msg="StartContainer for \"e09bb705168040e0272738c3c0d60044e2e115e5acb9e90c1feb5fbca0802f73\" returns successfully" Aug 13 00:55:33.009785 env[1359]: time="2025-08-13T00:55:33.009723169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77c8d6dd-fkw2g,Uid:1c511538-4ed9-4440-87c2-83635efecdfa,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:55:34.746038 env[1359]: time="2025-08-13T00:55:34.746000457Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:34.826590 env[1359]: time="2025-08-13T00:55:34.781121890Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:34.826590 env[1359]: time="2025-08-13T00:55:34.788190492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:34.826590 env[1359]: time="2025-08-13T00:55:34.799871174Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:55:34.826590 env[1359]: time="2025-08-13T00:55:34.800008363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:55:35.582000 audit[5021]: NETFILTER_CFG table=filter:119 family=2 entries=12 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:35.849705 kernel: audit: type=1325 audit(1755046535.582:422): table=filter:119 family=2 entries=12 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:35.969860 kernel: audit: type=1300 audit(1755046535.582:422): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffee8f2c0d0 a2=0 a3=7ffee8f2c0bc items=0 ppid=2438 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:35.969945 kernel: audit: type=1327 audit(1755046535.582:422): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:35.986430 kernel: audit: type=1325 audit(1755046535.600:423): table=nat:120 family=2 entries=22 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:36.004826 kernel: audit: type=1300 audit(1755046535.600:423): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffee8f2c0d0 a2=0 a3=7ffee8f2c0bc items=0 ppid=2438 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:36.004932 kernel: audit: type=1327 audit(1755046535.600:423): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:35.582000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffee8f2c0d0 a2=0 a3=7ffee8f2c0bc items=0 ppid=2438 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:35.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:35.600000 audit[5021]: NETFILTER_CFG table=nat:120 family=2 entries=22 op=nft_register_rule pid=5021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:55:35.600000 audit[5021]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffee8f2c0d0 a2=0 a3=7ffee8f2c0bc items=0 ppid=2438 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:35.600000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:55:38.027282 env[1359]: time="2025-08-13T00:55:38.027145912Z" level=error msg="ExecSync for \"3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:37.339 [WARNING][5001] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0", GenerateName:"calico-apiserver-7cbf5687db-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ec0c637-128e-47db-8784-76084717fd4b", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbf5687db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf", Pod:"calico-apiserver-7cbf5687db-jtpwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30dadbcd7c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:37.373 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:37.373 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" iface="eth0" netns="" Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:37.375 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:37.375 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:39.285 [INFO][5029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:39.294 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:39.507 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:39.513 [WARNING][5029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:39.513 [INFO][5029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:39.514 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:39.564142 env[1359]: 2025-08-13 00:55:39.519 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:39.631306 env[1359]: time="2025-08-13T00:55:39.564159136Z" level=info msg="TearDown network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\" successfully" Aug 13 00:55:39.631306 env[1359]: time="2025-08-13T00:55:39.564188031Z" level=info msg="StopPodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\" returns successfully" Aug 13 00:55:39.753098 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:55:39.769248 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2d5fadcb713: link becomes ready Aug 13 00:55:39.798550 systemd-networkd[1119]: cali2d5fadcb713: Link UP Aug 13 00:55:39.799196 systemd-networkd[1119]: cali2d5fadcb713: Gained carrier Aug 13 00:55:39.908961 env[1359]: time="2025-08-13T00:55:39.908768572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:37.340 [INFO][4995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0 calico-apiserver-6d77c8d6dd- calico-apiserver 1c511538-4ed9-4440-87c2-83635efecdfa 1070 0 2025-08-13 00:55:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d77c8d6dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d77c8d6dd-fkw2g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d5fadcb713 [] [] }} ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-fkw2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:37.375 [INFO][4995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-fkw2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.285 [INFO][5032] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" HandleID="k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.296 [INFO][5032] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" HandleID="k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001225e0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d77c8d6dd-fkw2g", "timestamp":"2025-08-13 00:55:39.28591262 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.296 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.297 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.297 [INFO][5032] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.374 [INFO][5032] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.481 [INFO][5032] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.486 [INFO][5032] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.487 [INFO][5032] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.489 [INFO][5032] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.489 [INFO][5032] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.491 [INFO][5032] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5 Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.493 [INFO][5032] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.506 [INFO][5032] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.507 [INFO][5032] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" host="localhost" Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.507 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:55:40.353618 env[1359]: 2025-08-13 00:55:39.507 [INFO][5032] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" HandleID="k8s-pod-network.af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Workload="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" Aug 13 00:55:40.356409 env[1359]: 2025-08-13 00:55:39.624 [INFO][4995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-fkw2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0", GenerateName:"calico-apiserver-6d77c8d6dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c511538-4ed9-4440-87c2-83635efecdfa", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 55, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d77c8d6dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d77c8d6dd-fkw2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d5fadcb713", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:40.356409 env[1359]: 2025-08-13 00:55:39.655 [INFO][4995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-fkw2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" Aug 13 00:55:40.356409 env[1359]: 2025-08-13 00:55:39.655 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d5fadcb713 ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-fkw2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" Aug 13 00:55:40.356409 env[1359]: 2025-08-13 00:55:39.756 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-fkw2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" Aug 13 00:55:40.356409 env[1359]: 2025-08-13 00:55:39.765 [INFO][4995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-fkw2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0", GenerateName:"calico-apiserver-6d77c8d6dd-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"1c511538-4ed9-4440-87c2-83635efecdfa", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 55, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d77c8d6dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5", Pod:"calico-apiserver-6d77c8d6dd-fkw2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d5fadcb713", MAC:"3a:98:69:78:fb:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:40.356409 env[1359]: 2025-08-13 00:55:40.337 [INFO][4995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77c8d6dd-fkw2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77c8d6dd--fkw2g-eth0" Aug 13 00:55:40.476374 env[1359]: time="2025-08-13T00:55:40.476314198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:40.476374 env[1359]: time="2025-08-13T00:55:40.476346556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:40.476653 env[1359]: time="2025-08-13T00:55:40.476367636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:40.477097 env[1359]: time="2025-08-13T00:55:40.476801202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5 pid=5069 runtime=io.containerd.runc.v2 Aug 13 00:55:40.515392 systemd[1]: run-containerd-runc-k8s.io-af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5-runc.AWe8g0.mount: Deactivated successfully. Aug 13 00:55:40.550614 systemd-resolved[1280]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:55:40.574830 env[1359]: time="2025-08-13T00:55:40.574801789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77c8d6dd-fkw2g,Uid:1c511538-4ed9-4440-87c2-83635efecdfa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5\"" Aug 13 00:55:40.578743 env[1359]: time="2025-08-13T00:55:40.578725465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:40.581857 env[1359]: time="2025-08-13T00:55:40.581836606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:40.583303 env[1359]: time="2025-08-13T00:55:40.583287546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:40.585478 
env[1359]: time="2025-08-13T00:55:40.585461340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:40.585954 env[1359]: time="2025-08-13T00:55:40.585705524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:55:40.646000 audit[5104]: NETFILTER_CFG table=filter:121 family=2 entries=61 op=nft_register_chain pid=5104 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:40.662673 kernel: audit: type=1325 audit(1755046540.646:424): table=filter:121 family=2 entries=61 op=nft_register_chain pid=5104 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:40.666722 kernel: audit: type=1300 audit(1755046540.646:424): arch=c000003e syscall=46 success=yes exit=28984 a0=3 a1=7ffc47bcb910 a2=0 a3=7ffc47bcb8fc items=0 ppid=3662 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:40.666774 kernel: audit: type=1327 audit(1755046540.646:424): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:40.646000 audit[5104]: SYSCALL arch=c000003e syscall=46 success=yes exit=28984 a0=3 a1=7ffc47bcb910 a2=0 a3=7ffc47bcb8fc items=0 ppid=3662 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:40.646000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:40.840707 systemd-networkd[1119]: cali2d5fadcb713: Gained IPv6LL Aug 13 00:55:41.012538 kubelet[2292]: E0813 00:55:40.874065 2292 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4" cmd=["/health","-ready"] Aug 13 00:55:41.046898 env[1359]: time="2025-08-13T00:55:41.046863887Z" level=info msg="RemovePodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\"" Aug 13 00:55:41.047015 env[1359]: time="2025-08-13T00:55:41.046895027Z" level=info msg="Forcibly stopping sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\"" Aug 13 00:55:41.163542 kubelet[2292]: I0813 00:55:41.144087 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-kl8br" podStartSLOduration=41.495727529 podStartE2EDuration="55.136722867s" podCreationTimestamp="2025-08-13 00:54:46 +0000 UTC" firstStartedPulling="2025-08-13 00:55:17.004410698 +0000 UTC m=+45.661088357" lastFinishedPulling="2025-08-13 00:55:30.645406034 +0000 UTC m=+59.302083695" observedRunningTime="2025-08-13 00:55:33.30345407 +0000 UTC m=+61.960131744" watchObservedRunningTime="2025-08-13 00:55:41.136722867 +0000 UTC m=+69.793400534" Aug 13 00:55:41.188860 env[1359]: time="2025-08-13T00:55:41.188777220Z" level=info msg="CreateContainer within sandbox \"2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:55:41.189615 env[1359]: time="2025-08-13T00:55:41.189311672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:55:41.200832 kubelet[2292]: 
E0813 00:55:41.200083 2292 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.809s" Aug 13 00:55:41.211271 env[1359]: time="2025-08-13T00:55:41.209028056Z" level=info msg="CreateContainer within sandbox \"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:55:41.250620 env[1359]: time="2025-08-13T00:55:41.250586386Z" level=info msg="CreateContainer within sandbox \"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4\"" Aug 13 00:55:41.258681 env[1359]: time="2025-08-13T00:55:41.258645270Z" level=info msg="CreateContainer within sandbox \"2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a436bff13d7099e6c7fa44af1469709bd28f31910afc71781d46400bdd7f1fcb\"" Aug 13 00:55:41.259880 env[1359]: time="2025-08-13T00:55:41.259808883Z" level=info msg="CreateContainer within sandbox \"af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:55:41.259983 env[1359]: time="2025-08-13T00:55:41.259962035Z" level=info msg="StartContainer for \"aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4\"" Aug 13 00:55:41.264801 env[1359]: time="2025-08-13T00:55:41.264376791Z" level=info msg="StartContainer for \"a436bff13d7099e6c7fa44af1469709bd28f31910afc71781d46400bdd7f1fcb\"" Aug 13 00:55:41.279354 env[1359]: time="2025-08-13T00:55:41.279271217Z" level=info msg="CreateContainer within sandbox \"af2e128c20f654b9c8b4670bae6c270586f53ffc4b3bf603db05e5a6e2cf4cd5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"7346efb18e4439e34c721f5b0f610faa4e9e13866d486e50b53fc60960b15791\"" Aug 13 00:55:41.295271 env[1359]: time="2025-08-13T00:55:41.293720424Z" level=info msg="StartContainer for \"7346efb18e4439e34c721f5b0f610faa4e9e13866d486e50b53fc60960b15791\"" Aug 13 00:55:41.341455 env[1359]: time="2025-08-13T00:55:41.341421917Z" level=info msg="StartContainer for \"a436bff13d7099e6c7fa44af1469709bd28f31910afc71781d46400bdd7f1fcb\" returns successfully" Aug 13 00:55:41.409431 env[1359]: time="2025-08-13T00:55:41.409395695Z" level=info msg="StartContainer for \"aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4\" returns successfully" Aug 13 00:55:41.409611 env[1359]: time="2025-08-13T00:55:41.409594527Z" level=info msg="StartContainer for \"7346efb18e4439e34c721f5b0f610faa4e9e13866d486e50b53fc60960b15791\" returns successfully" Aug 13 00:55:41.482948 systemd[1]: run-containerd-runc-k8s.io-3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4-runc.IylMmt.mount: Deactivated successfully. Aug 13 00:55:42.707441 env[1359]: time="2025-08-13T00:55:42.707409951Z" level=info msg="StopContainer for \"aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4\" with timeout 30 (s)" Aug 13 00:55:42.736290 env[1359]: time="2025-08-13T00:55:42.707815086Z" level=info msg="Stop container \"aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4\" with signal terminated" Aug 13 00:55:42.763288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4-rootfs.mount: Deactivated successfully. 
Aug 13 00:55:42.776614 env[1359]: time="2025-08-13T00:55:42.767201684Z" level=info msg="shim disconnected" id=aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4 Aug 13 00:55:42.776614 env[1359]: time="2025-08-13T00:55:42.767235260Z" level=warning msg="cleaning up after shim disconnected" id=aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4 namespace=k8s.io Aug 13 00:55:42.776614 env[1359]: time="2025-08-13T00:55:42.767241681Z" level=info msg="cleaning up dead shim" Aug 13 00:55:42.790773 env[1359]: time="2025-08-13T00:55:42.790746537Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5262 runtime=io.containerd.runc.v2\n" Aug 13 00:55:43.231572 env[1359]: time="2025-08-13T00:55:43.231471350Z" level=info msg="StopContainer for \"aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4\" returns successfully" Aug 13 00:55:44.511600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257232168.mount: Deactivated successfully. 
Aug 13 00:55:45.106202 env[1359]: time="2025-08-13T00:55:45.106176394Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:45.131262 env[1359]: time="2025-08-13T00:55:45.130261712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:45.134572 env[1359]: time="2025-08-13T00:55:45.134549555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:45.143696 env[1359]: time="2025-08-13T00:55:45.138265556Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:45.143696 env[1359]: time="2025-08-13T00:55:45.138643010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:45.944 [WARNING][5114] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0", GenerateName:"calico-apiserver-7cbf5687db-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ec0c637-128e-47db-8784-76084717fd4b", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbf5687db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf", Pod:"calico-apiserver-7cbf5687db-jtpwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30dadbcd7c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:45.999 [INFO][5114] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:46.002 [INFO][5114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" iface="eth0" netns="" Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:46.005 [INFO][5114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:46.005 [INFO][5114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:47.274 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:47.284 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:47.287 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:47.336 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:47.343 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" HandleID="k8s-pod-network.b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:47.345 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:55:47.412470 env[1359]: 2025-08-13 00:55:47.361 [INFO][5114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:55:47.412470 env[1359]: time="2025-08-13T00:55:47.384768311Z" level=info msg="TearDown network for sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\" successfully" Aug 13 00:55:47.647078 env[1359]: time="2025-08-13T00:55:47.487173556Z" level=info msg="RemovePodSandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\" returns successfully" Aug 13 00:55:48.161940 env[1359]: time="2025-08-13T00:55:48.161630139Z" level=info msg="StopPodSandbox for \"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf\"" Aug 13 00:55:48.161940 env[1359]: time="2025-08-13T00:55:48.161676796Z" level=info msg="Container to stop \"aee3444c5323427207f553546ff052a218f3c31b9f62b943468afb0a4035dfe4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:55:48.161940 env[1359]: time="2025-08-13T00:55:48.161628269Z" level=info msg="StopPodSandbox for \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\"" Aug 13 00:55:48.199371 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf-shm.mount: Deactivated successfully. Aug 13 00:55:48.225301 env[1359]: time="2025-08-13T00:55:48.225271090Z" level=info msg="shim disconnected" id=657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf Aug 13 00:55:48.225517 env[1359]: time="2025-08-13T00:55:48.225506875Z" level=warning msg="cleaning up after shim disconnected" id=657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf namespace=k8s.io Aug 13 00:55:48.225592 env[1359]: time="2025-08-13T00:55:48.225582165Z" level=info msg="cleaning up dead shim" Aug 13 00:55:48.225970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf-rootfs.mount: Deactivated successfully. Aug 13 00:55:48.249637 env[1359]: time="2025-08-13T00:55:48.249612623Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5315 runtime=io.containerd.runc.v2\n" Aug 13 00:55:48.428764 env[1359]: time="2025-08-13T00:55:48.427676086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:55:53.537779 env[1359]: time="2025-08-13T00:55:53.503275029Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:53.537779 env[1359]: time="2025-08-13T00:55:53.524695101Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:53.537779 env[1359]: time="2025-08-13T00:55:53.533275149Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:53.537779 env[1359]: time="2025-08-13T00:55:53.538747639Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:53.537779 env[1359]: time="2025-08-13T00:55:53.539049486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 00:55:57.909484 systemd-networkd[1119]: cali30dadbcd7c0: Link DOWN Aug 13 00:55:57.909491 systemd-networkd[1119]: cali30dadbcd7c0: Lost carrier Aug 13 00:55:58.648542 env[1359]: time="2025-08-13T00:55:58.648505244Z" level=error msg="ExecSync for \"4c03a0188c4576565a1674ef49cdd4981f3247bac1d774827e70dcbf7f9f30f0\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Aug 13 00:55:58.751930 kernel: audit: type=1325 audit(1755046558.719:425): table=filter:122 family=2 entries=67 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:58.762957 kernel: audit: type=1300 audit(1755046558.719:425): arch=c000003e syscall=46 success=yes exit=11440 a0=3 a1=7ffd9ead2380 a2=0 a3=7ffd9ead236c items=0 ppid=3662 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:58.765775 kernel: audit: type=1327 audit(1755046558.719:425): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:58.765804 kernel: audit: type=1325 audit(1755046558.726:426): table=filter:123 family=2 entries=4 op=nft_unregister_chain pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:58.771635 kernel: audit: type=1300 audit(1755046558.726:426): arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7ffd9ead2380 a2=0 a3=55b4a2e32000 items=0 ppid=3662 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:58.774647 kernel: audit: type=1327 audit(1755046558.726:426): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:58.719000 audit[5409]: NETFILTER_CFG table=filter:122 family=2 entries=67 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:58.719000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=11440 a0=3 a1=7ffd9ead2380 a2=0 a3=7ffd9ead236c items=0 ppid=3662 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:58.719000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:55:58.726000 audit[5409]: NETFILTER_CFG table=filter:123 family=2 entries=4 op=nft_unregister_chain pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:55:58.726000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7ffd9ead2380 
a2=0 a3=55b4a2e32000 items=0 ppid=3662 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:55:58.726000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:55:54.299 [WARNING][5342] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" WorkloadEndpoint="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:55:54.376 [INFO][5342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:55:54.379 [INFO][5342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" iface="eth0" netns="" Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:55:54.382 [INFO][5342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:55:54.382 [INFO][5342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:55:58.817 [INFO][5381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:55:58.828 [INFO][5381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:55:58.830 [INFO][5381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:56:00.715 [WARNING][5381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:56:00.766 [INFO][5381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:56:00.874 [INFO][5381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:56:00.981430 env[1359]: 2025-08-13 00:56:00.929 [INFO][5342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:56:00.981430 env[1359]: time="2025-08-13T00:56:00.982543658Z" level=info msg="TearDown network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\" successfully" Aug 13 00:56:00.981430 env[1359]: time="2025-08-13T00:56:00.982582541Z" level=info msg="StopPodSandbox for \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\" returns successfully" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:55:57.490 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:55:57.684 [INFO][5343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" iface="eth0" netns="/var/run/netns/cni-beb72141-1fb7-4c9c-2b55-82bc65ee1a18" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:55:57.723 [INFO][5343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" iface="eth0" netns="/var/run/netns/cni-beb72141-1fb7-4c9c-2b55-82bc65ee1a18" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:55:57.799 [INFO][5343] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" after=112.726404ms iface="eth0" netns="/var/run/netns/cni-beb72141-1fb7-4c9c-2b55-82bc65ee1a18" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:55:57.802 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:55:57.802 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:55:59.116 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" HandleID="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:55:59.116 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:56:00.873 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:56:01.453 [INFO][5397] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" HandleID="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:56:01.502 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" HandleID="k8s-pod-network.657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Workload="localhost-k8s-calico--apiserver--7cbf5687db--jtpwn-eth0" Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:56:01.515 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:01.614388 env[1359]: 2025-08-13 00:56:01.551 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf" Aug 13 00:56:01.684861 env[1359]: time="2025-08-13T00:56:01.630637375Z" level=info msg="TearDown network for sandbox \"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf\" successfully" Aug 13 00:56:01.684861 env[1359]: time="2025-08-13T00:56:01.630671066Z" level=info msg="StopPodSandbox for \"657ba44fa3f3f1a566633050ffe20efe8766043db556c5b849b1f555168a4ddf\" returns successfully" Aug 13 00:56:01.724523 systemd[1]: run-netns-cni\x2dbeb72141\x2d1fb7\x2d4c9c\x2d2b55\x2d82bc65ee1a18.mount: Deactivated successfully. 
Aug 13 00:56:05.608046 env[1359]: time="2025-08-13T00:56:05.607851448Z" level=info msg="RemovePodSandbox for \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\"" Aug 13 00:56:05.608046 env[1359]: time="2025-08-13T00:56:05.607973098Z" level=info msg="Forcibly stopping sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\"" Aug 13 00:56:05.608046 env[1359]: time="2025-08-13T00:56:05.608342041Z" level=info msg="StopPodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\"" Aug 13 00:56:05.608046 env[1359]: time="2025-08-13T00:56:05.608375416Z" level=error msg="StopPodSandbox for \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\": not found" Aug 13 00:56:05.793507 kubelet[2292]: E0813 00:56:05.787608 2292 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="4c03a0188c4576565a1674ef49cdd4981f3247bac1d774827e70dcbf7f9f30f0" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Aug 13 00:56:10.195000 audit[5490]: NETFILTER_CFG table=filter:124 family=2 entries=12 op=nft_register_rule pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:10.428055 kernel: audit: type=1325 audit(1755046570.195:427): table=filter:124 family=2 entries=12 op=nft_register_rule pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:10.450234 kernel: audit: type=1300 audit(1755046570.195:427): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd64d7f840 a2=0 a3=7ffd64d7f82c items=0 ppid=2438 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:10.457846 kernel: audit: type=1327 audit(1755046570.195:427): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:10.457913 kernel: audit: type=1325 audit(1755046570.215:428): table=nat:125 family=2 entries=36 op=nft_register_rule pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:10.457940 kernel: audit: type=1300 audit(1755046570.215:428): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd64d7f840 a2=0 a3=7ffd64d7f82c items=0 ppid=2438 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:10.475450 kernel: audit: type=1327 audit(1755046570.215:428): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:10.195000 audit[5490]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd64d7f840 a2=0 a3=7ffd64d7f82c items=0 ppid=2438 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:10.195000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:10.215000 audit[5490]: NETFILTER_CFG table=nat:125 family=2 entries=36 op=nft_register_rule pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:56:10.215000 audit[5490]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd64d7f840 a2=0 a3=7ffd64d7f82c items=0 ppid=2438 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:10.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:56:10.556670 env[1359]: time="2025-08-13T00:56:10.531594808Z" level=error msg="ExecSync for \"3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:13.253 [WARNING][5479] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" WorkloadEndpoint="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:13.433 [INFO][5479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:13.439 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" iface="eth0" netns="" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:13.441 [INFO][5479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:13.441 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:19.262 [INFO][5494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:19.428 [INFO][5494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:19.443 [INFO][5494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:20.434 [WARNING][5494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:20.437 [INFO][5494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" HandleID="k8s-pod-network.a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Workload="localhost-k8s-whisker--969784c8c--swhml-eth0" Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:20.453 [INFO][5494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:56:20.580827 env[1359]: 2025-08-13 00:56:20.514 [INFO][5479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538" Aug 13 00:56:20.580827 env[1359]: time="2025-08-13T00:56:20.570764635Z" level=info msg="TearDown network for sandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\" successfully" Aug 13 00:56:20.757932 env[1359]: time="2025-08-13T00:56:20.618288435Z" level=info msg="RemovePodSandbox \"a3490b6f504c2936af539ce96dfed71afa5f3e884768438a2c75809b77dea538\" returns successfully" Aug 13 00:56:22.470170 kubelet[2292]: E0813 00:56:10.882376 2292 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Aug 13 00:56:22.717394 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.68.195:40106.service. Aug 13 00:56:22.744803 kernel: audit: type=1130 audit(1755046582.722:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:40106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:22.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:40106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:23.176000 audit[5503]: USER_ACCT pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:23.229244 kernel: audit: type=1101 audit(1755046583.176:430): pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:23.231733 kernel: audit: type=1103 audit(1755046583.185:431): pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:23.231775 kernel: audit: type=1006 audit(1755046583.185:432): pid=5503 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Aug 13 00:56:23.234199 kernel: audit: type=1300 audit(1755046583.185:432): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6e999560 a2=3 a3=0 items=0 ppid=1 pid=5503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:23.234230 kernel: audit: type=1327 audit(1755046583.185:432): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:23.185000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:23.185000 audit[5503]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 
a0=5 a1=7fff6e999560 a2=3 a3=0 items=0 ppid=1 pid=5503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:23.185000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:23.257359 sshd[5503]: Accepted publickey for core from 139.178.68.195 port 40106 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:56:23.193467 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:23.334427 systemd-logind[1347]: New session 10 of user core. Aug 13 00:56:23.339865 systemd[1]: Started session-10.scope. Aug 13 00:56:23.348891 kernel: audit: type=1105 audit(1755046583.342:433): pid=5503 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:23.342000 audit[5503]: USER_START pid=5503 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:23.347000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:23.352573 kernel: audit: type=1103 audit(1755046583.347:434): pid=5508 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 
00:56:27.430369 env[1359]: time="2025-08-13T00:56:27.430302883Z" level=info msg="CreateContainer within sandbox \"db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:56:27.617942 env[1359]: time="2025-08-13T00:56:27.443272016Z" level=info msg="StopPodSandbox for \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\"" Aug 13 00:56:27.707688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232311474.mount: Deactivated successfully. Aug 13 00:56:27.739496 env[1359]: time="2025-08-13T00:56:27.712313795Z" level=info msg="CreateContainer within sandbox \"db10023637e28542aed5c78d84345ba7ce6e938e17042a06d249056d1922e5c4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d69484dd5506de778a348de15073995af137d59133e58efec5552c64c515cee6\"" Aug 13 00:56:32.780384 env[1359]: time="2025-08-13T00:56:32.776711551Z" level=error msg="ExecSync for \"3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Aug 13 00:56:37.685971 env[1359]: time="2025-08-13T00:56:37.668304419Z" level=error msg="ExecSync for \"4c03a0188c4576565a1674ef49cdd4981f3247bac1d774827e70dcbf7f9f30f0\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Aug 13 00:56:40.156264 env[1359]: time="2025-08-13T00:56:40.153230032Z" level=info msg="shim disconnected" id=00a6cffb1b34db4334c6878ce142aac2cce2154217aa33a00a31d674fdc39ba9 Aug 13 00:56:40.156264 env[1359]: time="2025-08-13T00:56:40.153265662Z" level=warning msg="cleaning up after shim disconnected" id=00a6cffb1b34db4334c6878ce142aac2cce2154217aa33a00a31d674fdc39ba9 namespace=k8s.io Aug 13 00:56:40.156264 env[1359]: time="2025-08-13T00:56:40.153274065Z" level=info msg="cleaning up dead shim" 
Aug 13 00:56:40.156264 env[1359]: time="2025-08-13T00:56:40.159524555Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5599 runtime=io.containerd.runc.v2\n" Aug 13 00:56:40.235042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00a6cffb1b34db4334c6878ce142aac2cce2154217aa33a00a31d674fdc39ba9-rootfs.mount: Deactivated successfully. Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:41.302 [WARNING][5561] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q2fnk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7dd644a-3303-4570-9359-66c16da8794d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96", Pod:"csi-node-driver-q2fnk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib575ca47f0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:41.526 [INFO][5561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:41.533 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" iface="eth0" netns="" Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:41.546 [INFO][5561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:41.546 [INFO][5561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:43.724 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:43.734 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:43.734 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:43.766 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:43.766 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:43.767 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:43.828260 env[1359]: 2025-08-13 00:56:43.792 [INFO][5561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:56:43.828260 env[1359]: time="2025-08-13T00:56:43.828955511Z" level=info msg="TearDown network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\" successfully" Aug 13 00:56:43.828260 env[1359]: time="2025-08-13T00:56:43.828986079Z" level=info msg="StopPodSandbox for \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\" returns successfully" Aug 13 00:56:47.630447 kubelet[2292]: E0813 00:56:47.630396 2292 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4" cmd=["/health","-ready"] Aug 13 00:56:47.715881 kubelet[2292]: E0813 00:56:47.341438 2292 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f\": not found" 
podSandboxID="b85581a3ed873aa27877808bac720610bbb163eac24f554e56da52d19cfedc8f" Aug 13 00:56:47.750548 sshd[5503]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:47.770332 kubelet[2292]: E0813 00:56:47.770305 2292 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="3f255bffd0ccb499cb8b24ba2fce32c0421326bcc89b30c2d497d00d9cb3f8b4" cmd=["/health","-live"] Aug 13 00:56:47.770756 kubelet[2292]: E0813 00:56:47.770740 2292 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="4c03a0188c4576565a1674ef49cdd4981f3247bac1d774827e70dcbf7f9f30f0" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Aug 13 00:56:47.776000 audit[5503]: USER_END pid=5503 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:47.791852 kernel: audit: type=1106 audit(1755046607.776:435): pid=5503 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:47.798572 kernel: audit: type=1104 audit(1755046607.780:436): pid=5503 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:47.780000 audit[5503]: CRED_DISP pid=5503 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:47.805592 env[1359]: time="2025-08-13T00:56:47.805393949Z" level=info msg="StartContainer for \"d69484dd5506de778a348de15073995af137d59133e58efec5552c64c515cee6\"" Aug 13 00:56:47.807116 env[1359]: time="2025-08-13T00:56:47.806041615Z" level=info msg="RemovePodSandbox for \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\"" Aug 13 00:56:47.807116 env[1359]: time="2025-08-13T00:56:47.806057154Z" level=info msg="Forcibly stopping sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\"" Aug 13 00:56:47.806815 systemd[1]: sshd@7-139.178.70.105:22-139.178.68.195:40106.service: Deactivated successfully. Aug 13 00:56:47.831880 kernel: audit: type=1131 audit(1755046607.808:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:40106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:47.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.105:22-139.178.68.195:40106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:47.818840 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:56:47.820888 systemd-logind[1347]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:56:47.878791 systemd-logind[1347]: Removed session 10. Aug 13 00:56:47.902958 systemd[1]: run-containerd-runc-k8s.io-cdc5b193cd998694992868b062a1b14fec6d3176601aab77cf6fadcaefce7376-runc.zl21vZ.mount: Deactivated successfully. 
Aug 13 00:56:47.948323 env[1359]: time="2025-08-13T00:56:47.948287966Z" level=info msg="StartContainer for \"d69484dd5506de778a348de15073995af137d59133e58efec5552c64c515cee6\" returns successfully" Aug 13 00:56:48.540110 kubelet[2292]: E0813 00:56:48.535959 2292 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m3.752s" Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.072 [WARNING][5720] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q2fnk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7dd644a-3303-4570-9359-66c16da8794d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96", Pod:"csi-node-driver-q2fnk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calib575ca47f0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.119 [INFO][5720] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.121 [INFO][5720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" iface="eth0" netns="" Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.123 [INFO][5720] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.123 [INFO][5720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.886 [INFO][5750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.900 [INFO][5750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.902 [INFO][5750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.956 [WARNING][5750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.956 [INFO][5750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" HandleID="k8s-pod-network.c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Workload="localhost-k8s-csi--node--driver--q2fnk-eth0" Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.962 [INFO][5750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:50.018319 env[1359]: 2025-08-13 00:56:49.984 [INFO][5720] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755" Aug 13 00:56:50.018319 env[1359]: time="2025-08-13T00:56:50.015025188Z" level=info msg="TearDown network for sandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\" successfully" Aug 13 00:56:50.099361 env[1359]: time="2025-08-13T00:56:50.035109762Z" level=info msg="RemovePodSandbox \"c780a69a798faf55f788d2e577cf327b30682d938dbc9bdd9113f472e0563755\" returns successfully" Aug 13 00:56:51.015790 env[1359]: time="2025-08-13T00:56:51.015618218Z" level=info msg="StopPodSandbox for \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\"" Aug 13 00:56:51.015790 env[1359]: time="2025-08-13T00:56:51.015729415Z" level=info msg="CreateContainer within sandbox \"2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:56:51.200800 env[1359]: time="2025-08-13T00:56:51.200705484Z" level=info msg="CreateContainer within sandbox \"2268f8f1ae7be1ca4f463ae244cfd9ef02731922bd8599221714cdce7fefcf96\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"401851bedfc14c9b0b506c90bb72880e2180371b0328d2e461b296ec6f43fc98\"" Aug 13 00:56:51.515447 env[1359]: time="2025-08-13T00:56:51.515316953Z" level=info msg="StartContainer for \"401851bedfc14c9b0b506c90bb72880e2180371b0328d2e461b296ec6f43fc98\"" Aug 13 00:56:51.679265 env[1359]: time="2025-08-13T00:56:51.679225788Z" level=info msg="StartContainer for \"401851bedfc14c9b0b506c90bb72880e2180371b0328d2e461b296ec6f43fc98\" returns successfully" Aug 13 00:56:51.910329 kubelet[2292]: I0813 00:56:51.905622 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cbf5687db-jtpwn" podStartSLOduration=104.19497636 podStartE2EDuration="2m7.394798377s" podCreationTimestamp="2025-08-13 00:54:44 +0000 UTC" firstStartedPulling="2025-08-13 00:55:17.979755696 +0000 UTC m=+46.636433355" lastFinishedPulling="2025-08-13 00:55:41.179577711 +0000 UTC m=+69.836255372" observedRunningTime="2025-08-13 00:56:47.66785873 +0000 UTC m=+136.324536392" watchObservedRunningTime="2025-08-13 00:56:51.394798377 +0000 UTC m=+140.051476041" Aug 13 00:56:51.967395 kubelet[2292]: I0813 00:56:51.949042 2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d77c8d6dd-fkw2g" podStartSLOduration=83.949025574 podStartE2EDuration="1m23.949025574s" podCreationTimestamp="2025-08-13 00:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:56:51.375945842 +0000 UTC m=+140.032623518" watchObservedRunningTime="2025-08-13 00:56:51.949025574 +0000 UTC m=+140.605703237" Aug 13 00:56:52.294514 kubelet[2292]: I0813 00:56:52.294490 2292 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/5ec0c637-128e-47db-8784-76084717fd4b-calico-apiserver-certs\") pod \"5ec0c637-128e-47db-8784-76084717fd4b\" (UID: \"5ec0c637-128e-47db-8784-76084717fd4b\") " Aug 13 00:56:52.294767 kubelet[2292]: I0813 00:56:52.294753 2292 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps8zj\" (UniqueName: \"kubernetes.io/projected/5ec0c637-128e-47db-8784-76084717fd4b-kube-api-access-ps8zj\") pod \"5ec0c637-128e-47db-8784-76084717fd4b\" (UID: \"5ec0c637-128e-47db-8784-76084717fd4b\") " Aug 13 00:56:52.297252 kubelet[2292]: E0813 00:56:52.297228 2292 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.451s" Aug 13 00:56:52.321807 kubelet[2292]: E0813 00:56:52.321775 2292 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod6bed0cdb-9639-4fbd-ab32-91d92304f2cb/d69484dd5506de778a348de15073995af137d59133e58efec5552c64c515cee6\": RecentStats: unable to find data in memory cache]" Aug 13 00:56:53.007227 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.68.195:60736.service. Aug 13 00:56:53.216613 kernel: audit: type=1130 audit(1755046613.049:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:60736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:56:53.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:60736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:56:54.327000 audit[5812]: USER_ACCT pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:54.364223 kernel: audit: type=1101 audit(1755046614.327:439): pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:54.365686 kernel: audit: type=1103 audit(1755046614.333:440): pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:54.368060 kernel: audit: type=1006 audit(1755046614.334:441): pid=5812 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Aug 13 00:56:54.370228 kernel: audit: type=1300 audit(1755046614.334:441): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe62cfcf40 a2=3 a3=0 items=0 ppid=1 pid=5812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:54.373313 kernel: audit: type=1327 audit(1755046614.334:441): proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:54.333000 audit[5812]: CRED_ACQ pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:54.334000 audit[5812]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 
a0=5 a1=7ffe62cfcf40 a2=3 a3=0 items=0 ppid=1 pid=5812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:56:54.334000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:56:54.392770 sshd[5812]: Accepted publickey for core from 139.178.68.195 port 60736 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:56:54.361299 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:54.459835 systemd-logind[1347]: New session 11 of user core. Aug 13 00:56:54.505763 kernel: audit: type=1105 audit(1755046614.485:442): pid=5812 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:54.505828 kernel: audit: type=1103 audit(1755046614.487:443): pid=5835 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:54.485000 audit[5812]: USER_START pid=5812 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:54.487000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:56:54.476063 systemd[1]: 
var-lib-kubelet-pods-5ec0c637\x2d128e\x2d47db\x2d8784\x2d76084717fd4b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dps8zj.mount: Deactivated successfully. Aug 13 00:56:54.476187 systemd[1]: var-lib-kubelet-pods-5ec0c637\x2d128e\x2d47db\x2d8784\x2d76084717fd4b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 00:56:54.477468 systemd[1]: Started session-11.scope. Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:53.017 [WARNING][5768] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e691412-289d-4857-95a4-f28ceeef2595", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407", Pod:"coredns-7c65d6cfc9-p64qm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa60dbc83a3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:53.182 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:53.182 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" iface="eth0" netns="" Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:53.182 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:53.182 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:55.126 [INFO][5814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:55.141 [INFO][5814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:55.141 [INFO][5814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:55.319 [WARNING][5814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:55.319 [INFO][5814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:55.328 [INFO][5814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:56:55.395133 env[1359]: 2025-08-13 00:56:55.344 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:56:55.494418 env[1359]: time="2025-08-13T00:56:55.400205035Z" level=info msg="TearDown network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\" successfully" Aug 13 00:56:55.494418 env[1359]: time="2025-08-13T00:56:55.400229268Z" level=info msg="StopPodSandbox for \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\" returns successfully" Aug 13 00:56:58.241403 kubelet[2292]: I0813 00:56:57.834303 2292 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec0c637-128e-47db-8784-76084717fd4b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "5ec0c637-128e-47db-8784-76084717fd4b" (UID: "5ec0c637-128e-47db-8784-76084717fd4b"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:56:58.312869 kubelet[2292]: I0813 00:56:58.312829 2292 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ec0c637-128e-47db-8784-76084717fd4b-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:58.394413 kubelet[2292]: I0813 00:56:58.394356 2292 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec0c637-128e-47db-8784-76084717fd4b-kube-api-access-ps8zj" (OuterVolumeSpecName: "kube-api-access-ps8zj") pod "5ec0c637-128e-47db-8784-76084717fd4b" (UID: "5ec0c637-128e-47db-8784-76084717fd4b"). InnerVolumeSpecName "kube-api-access-ps8zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:56:58.527118 kubelet[2292]: I0813 00:56:58.527088 2292 scope.go:117] "RemoveContainer" containerID="00a6cffb1b34db4334c6878ce142aac2cce2154217aa33a00a31d674fdc39ba9" Aug 13 00:56:58.576933 kubelet[2292]: I0813 00:56:58.576908 2292 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps8zj\" (UniqueName: \"kubernetes.io/projected/5ec0c637-128e-47db-8784-76084717fd4b-kube-api-access-ps8zj\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:59.700059 env[1359]: time="2025-08-13T00:56:59.700024409Z" level=info msg="RemovePodSandbox for \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\"" Aug 13 00:56:59.700059 env[1359]: time="2025-08-13T00:56:59.700050939Z" level=info msg="Forcibly stopping sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\"" Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:00.499 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e691412-289d-4857-95a4-f28ceeef2595", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37b5443e5c0343370ca8d5bad950aa1e387d3a86e192eb92ec6c0e28ade56407", Pod:"coredns-7c65d6cfc9-p64qm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa60dbc83a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:00.506 [INFO][5855] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:00.506 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" iface="eth0" netns="" Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:00.506 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:00.506 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:01.116 [INFO][5863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:01.124 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:01.124 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:01.153 [WARNING][5863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:01.153 [INFO][5863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" HandleID="k8s-pod-network.afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Workload="localhost-k8s-coredns--7c65d6cfc9--p64qm-eth0" Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:01.155 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:57:01.165189 env[1359]: 2025-08-13 00:57:01.161 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131" Aug 13 00:57:01.165189 env[1359]: time="2025-08-13T00:57:01.165181568Z" level=info msg="TearDown network for sandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\" successfully" Aug 13 00:57:01.355119 env[1359]: time="2025-08-13T00:57:01.215899833Z" level=info msg="RemovePodSandbox \"afd78bd9301babf9eb92fa64aa86ef442e6532e85ede229ad5136fb09b32b131\" returns successfully" Aug 13 00:57:01.647894 sshd[5812]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:01.685000 audit[5812]: USER_END pid=5812 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:01.712782 kernel: audit: type=1106 audit(1755046621.685:444): pid=5812 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:01.718997 kernel: audit: type=1104 audit(1755046621.690:445): pid=5812 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:01.690000 audit[5812]: CRED_DISP pid=5812 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:01.748115 systemd[1]: sshd@8-139.178.70.105:22-139.178.68.195:60736.service: Deactivated successfully. Aug 13 00:57:01.789925 kernel: audit: type=1131 audit(1755046621.772:446): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:60736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:01.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.105:22-139.178.68.195:60736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:01.786970 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:57:01.786981 systemd-logind[1347]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:57:01.805531 systemd-logind[1347]: Removed session 11. 
Aug 13 00:57:01.935091 env[1359]: time="2025-08-13T00:57:01.927675445Z" level=info msg="StopPodSandbox for \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\"" Aug 13 00:57:02.380274 kubelet[2292]: E0813 00:57:02.376607 2292 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.732s" Aug 13 00:57:02.507351 env[1359]: time="2025-08-13T00:57:02.507253256Z" level=info msg="CreateContainer within sandbox \"8e5e7258f63e452ae11b825b068e1ea109adc0862dd48f4ad3f05797791756f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Aug 13 00:57:02.682223 kubelet[2292]: E0813 00:57:02.389496 2292 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock" failed. No retries permitted until 2025-08-13 00:57:02.820845861 +0000 UTC m=+151.477523521 (durationBeforeRetry 500ms). Error: RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock, err: rpc error: code = DeadlineExceeded desc = context deadline exceeded Aug 13 00:57:02.760421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007674786.mount: Deactivated successfully. Aug 13 00:57:02.769094 env[1359]: time="2025-08-13T00:57:02.769066626Z" level=info msg="CreateContainer within sandbox \"8e5e7258f63e452ae11b825b068e1ea109adc0862dd48f4ad3f05797791756f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0d2a435be098fceb882e08aaca2a3785004b5b29b2e6c1a98e622a9181a4b72e\"" Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:03.970 [WARNING][5880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0", GenerateName:"calico-apiserver-7cbf5687db-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a86fd1c-50b7-4722-86dc-208e2b22565d", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbf5687db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6", Pod:"calico-apiserver-7cbf5687db-l5vzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd2ff4ab659", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:04.023 [INFO][5880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:04.028 [INFO][5880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" iface="eth0" netns="" Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:04.028 [INFO][5880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:04.028 [INFO][5880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:05.187 [INFO][5925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:05.202 [INFO][5925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:05.205 [INFO][5925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:05.337 [WARNING][5925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:05.337 [INFO][5925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:05.339 [INFO][5925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:57:05.395992 env[1359]: 2025-08-13 00:57:05.358 [INFO][5880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:57:05.395992 env[1359]: time="2025-08-13T00:57:05.387301159Z" level=info msg="TearDown network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\" successfully" Aug 13 00:57:05.395992 env[1359]: time="2025-08-13T00:57:05.387330722Z" level=info msg="StopPodSandbox for \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\" returns successfully" Aug 13 00:57:06.301384 env[1359]: time="2025-08-13T00:57:06.301328745Z" level=info msg="StartContainer for \"0d2a435be098fceb882e08aaca2a3785004b5b29b2e6c1a98e622a9181a4b72e\"" Aug 13 00:57:06.417929 env[1359]: time="2025-08-13T00:57:06.417896005Z" level=info msg="StartContainer for \"0d2a435be098fceb882e08aaca2a3785004b5b29b2e6c1a98e622a9181a4b72e\" returns successfully" Aug 13 00:57:06.468373 env[1359]: time="2025-08-13T00:57:06.468334304Z" level=info msg="RemovePodSandbox for \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\"" Aug 13 00:57:06.468530 env[1359]: time="2025-08-13T00:57:06.468504816Z" level=info msg="Forcibly 
stopping sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\"" Aug 13 00:57:06.652620 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.68.195:59128.service. Aug 13 00:57:06.693359 kernel: audit: type=1130 audit(1755046626.655:447): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:59128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:06.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:59128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:06.959000 audit[5976]: USER_ACCT pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:06.971368 kernel: audit: type=1101 audit(1755046626.959:448): pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:06.971523 kernel: audit: type=1103 audit(1755046626.966:449): pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:06.971547 kernel: audit: type=1006 audit(1755046626.966:450): pid=5976 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Aug 13 00:57:06.966000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:06.977044 kernel: audit: type=1300 audit(1755046626.966:450): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8ba6ff50 a2=3 a3=0 items=0 ppid=1 pid=5976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:06.979447 kernel: audit: type=1327 audit(1755046626.966:450): proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:06.966000 audit[5976]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8ba6ff50 a2=3 a3=0 items=0 ppid=1 pid=5976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:06.966000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:06.994492 sshd[5976]: Accepted publickey for core from 139.178.68.195 port 59128 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:57:06.973710 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:07.102215 systemd-logind[1347]: New session 12 of user core. Aug 13 00:57:07.114988 systemd[1]: Started session-12.scope. 
Aug 13 00:57:07.134636 kernel: audit: type=1105 audit(1755046627.117:451): pid=5976 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:07.134691 kernel: audit: type=1103 audit(1755046627.121:452): pid=5981 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:07.117000 audit[5976]: USER_START pid=5976 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:07.121000 audit[5981]: CRED_ACQ pid=5981 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:08.079 [WARNING][5972] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0", GenerateName:"calico-apiserver-7cbf5687db-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a86fd1c-50b7-4722-86dc-208e2b22565d", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbf5687db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3a293749f7c8d93e9b06a80cdf64b114f7bb5c0034e8c1b6d00738998687dd6", Pod:"calico-apiserver-7cbf5687db-l5vzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd2ff4ab659", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:08.158 [INFO][5972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:08.162 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" iface="eth0" netns="" Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:08.166 [INFO][5972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:08.166 [INFO][5972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:09.564 [INFO][5994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:09.612 [INFO][5994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:09.612 [INFO][5994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:10.048 [WARNING][5994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:10.051 [INFO][5994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" HandleID="k8s-pod-network.f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Workload="localhost-k8s-calico--apiserver--7cbf5687db--l5vzj-eth0" Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:10.079 [INFO][5994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:57:10.207002 env[1359]: 2025-08-13 00:57:10.139 [INFO][5972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe" Aug 13 00:57:10.207002 env[1359]: time="2025-08-13T00:57:10.207009216Z" level=info msg="TearDown network for sandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\" successfully" Aug 13 00:57:10.767214 env[1359]: time="2025-08-13T00:57:10.426957946Z" level=info msg="RemovePodSandbox \"f65906834c0db886e24364667b5998fd917cb752942368de908e209c9a42aafe\" returns successfully" Aug 13 00:57:26.566633 sshd[5976]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:26.679916 kernel: audit: type=1106 audit(1755046646.625:453): pid=5976 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.688331 kernel: audit: type=1130 audit(1755046646.627:454): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:45848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:26.689661 kernel: audit: type=1104 audit(1755046646.629:455): pid=5976 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.689707 kernel: audit: type=1131 audit(1755046646.633:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:59128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:26.625000 audit[5976]: USER_END pid=5976 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:45848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:26.629000 audit[5976]: CRED_DISP pid=5976 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.105:22-139.178.68.195:59128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:26.627617 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.68.195:45848.service. 
Aug 13 00:57:26.633872 systemd[1]: sshd@9-139.178.70.105:22-139.178.68.195:59128.service: Deactivated successfully. Aug 13 00:57:26.634626 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:57:26.636126 systemd-logind[1347]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:57:26.636749 systemd-logind[1347]: Removed session 12. Aug 13 00:57:26.775000 audit[6006]: USER_ACCT pid=6006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.778444 sshd[6006]: Accepted publickey for core from 139.178.68.195 port 45848 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s Aug 13 00:57:26.780582 kernel: audit: type=1101 audit(1755046646.775:457): pid=6006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.780000 audit[6006]: CRED_ACQ pid=6006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.786174 kernel: audit: type=1103 audit(1755046646.780:458): pid=6006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.786239 kernel: audit: type=1006 audit(1755046646.780:459): pid=6006 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Aug 13 00:57:26.786266 kernel: audit: 
type=1300 audit(1755046646.780:459): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6f477150 a2=3 a3=0 items=0 ppid=1 pid=6006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:26.780000 audit[6006]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6f477150 a2=3 a3=0 items=0 ppid=1 pid=6006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:26.793738 kernel: audit: type=1327 audit(1755046646.780:459): proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:26.780000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:26.790379 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:26.800737 systemd[1]: Started session-13.scope. Aug 13 00:57:26.801638 systemd-logind[1347]: New session 13 of user core. 
Aug 13 00:57:26.806000 audit[6006]: USER_START pid=6006 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.808000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:26.811602 kernel: audit: type=1105 audit(1755046646.806:460): pid=6006 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:37.137000 audit[6027]: NETFILTER_CFG table=filter:126 family=2 entries=12 op=nft_register_rule pid=6027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.321709 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:57:37.359269 kernel: audit: type=1325 audit(1755046657.137:462): table=filter:126 family=2 entries=12 op=nft_register_rule pid=6027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.364855 kernel: audit: type=1300 audit(1755046657.137:462): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd119f2a40 a2=0 a3=7ffd119f2a2c items=0 ppid=2438 pid=6027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.364897 kernel: audit: type=1327 audit(1755046657.137:462): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.370751 kernel: audit: type=1325 audit(1755046657.151:463): table=nat:127 family=2 entries=72 op=nft_unregister_chain pid=6027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.381747 kernel: audit: type=1300 audit(1755046657.151:463): arch=c000003e syscall=46 success=yes exit=20044 a0=3 a1=7ffd119f2a40 a2=0 a3=7ffd119f2a2c items=0 ppid=2438 pid=6027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.381799 kernel: audit: type=1327 audit(1755046657.151:463): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.381817 kernel: audit: type=1325 audit(1755046657.229:464): table=filter:128 family=2 entries=12 op=nft_register_rule pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.381847 kernel: audit: type=1300 audit(1755046657.229:464): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffef52633c0 a2=0 a3=7ffef52633ac items=0 ppid=2438 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.389147 kernel: audit: type=1327 audit(1755046657.229:464): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.389180 kernel: audit: type=1325 audit(1755046657.238:465): table=nat:129 family=2 entries=22 op=nft_register_rule pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.137000 audit[6027]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd119f2a40 a2=0 
a3=7ffd119f2a2c items=0 ppid=2438 pid=6027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.151000 audit[6027]: NETFILTER_CFG table=nat:127 family=2 entries=72 op=nft_unregister_chain pid=6027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.151000 audit[6027]: SYSCALL arch=c000003e syscall=46 success=yes exit=20044 a0=3 a1=7ffd119f2a40 a2=0 a3=7ffd119f2a2c items=0 ppid=2438 pid=6027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.151000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.229000 audit[6030]: NETFILTER_CFG table=filter:128 family=2 entries=12 op=nft_register_rule pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.229000 audit[6030]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffef52633c0 a2=0 a3=7ffef52633ac items=0 ppid=2438 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:37.238000 audit[6030]: NETFILTER_CFG table=nat:129 family=2 entries=22 op=nft_register_rule pid=6030 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:57:37.238000 audit[6030]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffef52633c0 a2=0 a3=7ffef52633ac items=0 ppid=2438 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:37.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:57:44.906387 sshd[6006]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:45.074553 kernel: kauditd_printk_skb: 2 callbacks suppressed Aug 13 00:57:45.101949 kernel: audit: type=1106 audit(1755046664.995:466): pid=6006 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:45.106088 kernel: audit: type=1104 audit(1755046664.999:467): pid=6006 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:45.108914 kernel: audit: type=1130 audit(1755046665.005:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.105:22-139.178.68.195:50630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:45.112445 kernel: audit: type=1131 audit(1755046665.025:469): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:45848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:57:44.995000 audit[6006]: USER_END pid=6006 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:44.999000 audit[6006]: CRED_DISP pid=6006 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:45.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.105:22-139.178.68.195:50630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:45.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.105:22-139.178.68.195:45848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:57:45.005574 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.68.195:50630.service. Aug 13 00:57:45.026369 systemd[1]: sshd@10-139.178.70.105:22-139.178.68.195:45848.service: Deactivated successfully. Aug 13 00:57:45.027704 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:57:45.028112 systemd-logind[1347]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:57:45.028713 systemd-logind[1347]: Removed session 13. 
Aug 13 00:57:45.469000 audit[6052]: USER_ACCT pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:45.496636 kernel: audit: type=1101 audit(1755046665.469:470): pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:45.512655 kernel: audit: type=1103 audit(1755046665.474:471): pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:45.516706 kernel: audit: type=1006 audit(1755046665.474:472): pid=6052 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Aug 13 00:57:45.520130 kernel: audit: type=1300 audit(1755046665.474:472): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1fe36cc0 a2=3 a3=0 items=0 ppid=1 pid=6052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:57:45.524881 kernel: audit: type=1327 audit(1755046665.474:472): proctitle=737368643A20636F7265205B707269765D Aug 13 00:57:45.474000 audit[6052]: CRED_ACQ pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:57:45.474000 audit[6052]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 
a1=7ffe1fe36cc0 a2=3 a3=0 items=0 ppid=1 pid=6052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:57:45.474000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:57:45.564991 sshd[6052]: Accepted publickey for core from 139.178.68.195 port 50630 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 00:57:45.497959 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:57:45.600254 systemd-logind[1347]: New session 14 of user core.
Aug 13 00:57:45.607687 systemd[1]: Started session-14.scope.
Aug 13 00:57:45.628125 kernel: audit: type=1105 audit(1755046665.622:473): pid=6052 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:45.622000 audit[6052]: USER_START pid=6052 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:45.648000 audit[6057]: CRED_ACQ pid=6057 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:50.716660 kubelet[2292]: E0813 00:57:50.707785 2292 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Aug 13 00:57:59.724530 sshd[6052]: pam_unix(sshd:session): session closed for user core
Aug 13 00:57:59.829420 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:57:59.836845 kernel: audit: type=1106 audit(1755046679.773:475): pid=6052 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.838656 kernel: audit: type=1104 audit(1755046679.778:476): pid=6052 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.838696 kernel: audit: type=1130 audit(1755046679.785:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:45624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:57:59.838714 kernel: audit: type=1131 audit(1755046679.786:478): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.105:22-139.178.68.195:50630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:57:59.773000 audit[6052]: USER_END pid=6052 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.778000 audit[6052]: CRED_DISP pid=6052 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:45624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:57:59.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.105:22-139.178.68.195:50630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:57:59.787398 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.68.195:45624.service.
Aug 13 00:57:59.787855 systemd[1]: sshd@11-139.178.70.105:22-139.178.68.195:50630.service: Deactivated successfully.
Aug 13 00:57:59.795353 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:57:59.795426 systemd-logind[1347]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:57:59.799964 systemd-logind[1347]: Removed session 14.
Aug 13 00:57:59.901000 audit[6075]: USER_ACCT pid=6075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.907022 kernel: audit: type=1101 audit(1755046679.901:479): pid=6075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.907055 sshd[6075]: Accepted publickey for core from 139.178.68.195 port 45624 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 00:57:59.916933 kernel: audit: type=1103 audit(1755046679.906:480): pid=6075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.916982 kernel: audit: type=1006 audit(1755046679.906:481): pid=6075 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Aug 13 00:57:59.917005 kernel: audit: type=1300 audit(1755046679.906:481): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeef2dbe30 a2=3 a3=0 items=0 ppid=1 pid=6075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:57:59.906000 audit[6075]: CRED_ACQ pid=6075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.906000 audit[6075]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeef2dbe30 a2=3 a3=0 items=0 ppid=1 pid=6075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:57:59.919892 kernel: audit: type=1327 audit(1755046679.906:481): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:57:59.906000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:57:59.916922 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:57:59.925485 systemd-logind[1347]: New session 15 of user core.
Aug 13 00:57:59.925878 systemd[1]: Started session-15.scope.
Aug 13 00:57:59.938447 kernel: audit: type=1105 audit(1755046679.931:482): pid=6075 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.931000 audit[6075]: USER_START pid=6075 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:57:59.936000 audit[6079]: CRED_ACQ pid=6079 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:58:00.974834 sshd[6075]: pam_unix(sshd:session): session closed for user core
Aug 13 00:58:01.051000 audit[6075]: USER_END pid=6075 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:58:01.056000 audit[6075]: CRED_DISP pid=6075 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:58:01.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:58:01.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.105:22-139.178.68.195:45624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:58:01.067441 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.68.195:54474.service.
Aug 13 00:58:01.087054 systemd[1]: sshd@12-139.178.70.105:22-139.178.68.195:45624.service: Deactivated successfully.
Aug 13 00:58:01.091751 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:58:01.091799 systemd-logind[1347]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:58:01.094277 systemd-logind[1347]: Removed session 15.
Aug 13 00:58:01.413000 audit[6085]: USER_ACCT pid=6085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:58:01.416000 audit[6085]: CRED_ACQ pid=6085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:58:01.416000 audit[6085]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4f3e8f50 a2=3 a3=0 items=0 ppid=1 pid=6085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:58:01.416000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:58:01.442012 sshd[6085]: Accepted publickey for core from 139.178.68.195 port 54474 ssh2: RSA SHA256:D9fG+3NI27jZdcTgqPkKAyN2+BKarYhwuSKj47TtA0s
Aug 13 00:58:01.435130 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:58:01.464079 systemd-logind[1347]: New session 16 of user core.
Aug 13 00:58:01.464382 systemd[1]: Started session-16.scope.
Aug 13 00:58:01.466000 audit[6085]: USER_START pid=6085 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:58:01.467000 audit[6090]: CRED_ACQ pid=6090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:58:50.531961 update_engine[1349]: I0813 00:58:49.047119 1349 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:58:52.283546 1349 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.315078 1349 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.550019 1349 omaha_request_params.cc:62] Current group set to lts
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.769914 1349 update_attempter.cc:499] Already updated boot flags. Skipping.
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.769932 1349 update_attempter.cc:643] Scheduling an action processor start.
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.784541 1349 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.885038 1349 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.901589 1349 omaha_request_action.cc:270] Posting an Omaha request to disabled
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.901602 1349 omaha_request_action.cc:271] Request:
Aug 13 00:59:02.901664 update_engine[1349]:
Aug 13 00:59:02.901664 update_engine[1349]:
Aug 13 00:59:02.901664 update_engine[1349]:
Aug 13 00:59:02.901664 update_engine[1349]:
Aug 13 00:59:02.901664 update_engine[1349]:
Aug 13 00:59:02.901664 update_engine[1349]:
Aug 13 00:59:02.901664 update_engine[1349]:
Aug 13 00:59:02.901664 update_engine[1349]:
Aug 13 00:59:02.901664 update_engine[1349]: I0813 00:59:02.901605 1349 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:59:06.608778 kernel: kauditd_printk_skb: 12 callbacks suppressed
Aug 13 00:59:06.650839 kernel: audit: type=1106 audit(1755046744.824:493): pid=6085 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:59:06.688198 kernel: audit: type=1104 audit(1755046745.033:494): pid=6085 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:59:06.697183 kernel: audit: type=1131 audit(1755046746.132:495): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:59:04.824000 audit[6085]: USER_END pid=6085 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:59:05.033000 audit[6085]: CRED_DISP pid=6085 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:59:06.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.105:22-139.178.68.195:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:59:07.000117 update_engine[1349]: I0813 00:59:04.192739 1349 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:59:07.000117 update_engine[1349]: E0813 00:59:04.529479 1349 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:59:07.000117 update_engine[1349]: I0813 00:59:04.824728 1349 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Aug 13 00:59:07.277339 locksmithd[1407]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Aug 13 00:59:03.705703 sshd[6085]: pam_unix(sshd:session): session closed for user core
Aug 13 00:59:06.052498 systemd[1]: sshd@13-139.178.70.105:22-139.178.68.195:54474.service: Deactivated successfully.
Aug 13 00:59:06.293392 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:59:06.315611 systemd-logind[1347]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:59:06.710613 systemd-logind[1347]: Removed session 16.
Aug 13 00:59:08.387474 env[1359]: time="2025-08-13T00:59:08.387434056Z" level=info msg="shim disconnected" id=63d0164770201e4d594899f3a1fc36f78cc79946c3f39bbcfaddb5a4c5c8895b
Aug 13 00:59:08.387474 env[1359]: time="2025-08-13T00:59:08.387470144Z" level=warning msg="cleaning up after shim disconnected" id=63d0164770201e4d594899f3a1fc36f78cc79946c3f39bbcfaddb5a4c5c8895b namespace=k8s.io
Aug 13 00:59:08.387474 env[1359]: time="2025-08-13T00:59:08.387479101Z" level=info msg="cleaning up dead shim"
Aug 13 00:59:08.763667 env[1359]: time="2025-08-13T00:59:08.399393147Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:59:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6134 runtime=io.containerd.runc.v2\n"