Feb 9 19:50:37.649983 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:50:37.649998 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:50:37.650004 kernel: Disabled fast string operations Feb 9 19:50:37.650008 kernel: BIOS-provided physical RAM map: Feb 9 19:50:37.650012 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Feb 9 19:50:37.650016 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Feb 9 19:50:37.650021 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Feb 9 19:50:37.650025 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Feb 9 19:50:37.650029 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Feb 9 19:50:37.650033 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Feb 9 19:50:37.650037 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Feb 9 19:50:37.650041 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Feb 9 19:50:37.650045 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Feb 9 19:50:37.650049 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 9 19:50:37.650055 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Feb 9 19:50:37.650059 kernel: NX (Execute Disable) protection: active Feb 9 19:50:37.650064 kernel: SMBIOS 2.7 present. Feb 9 19:50:37.650068 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Feb 9 19:50:37.650072 kernel: vmware: hypercall mode: 0x00 Feb 9 19:50:37.650077 kernel: Hypervisor detected: VMware Feb 9 19:50:37.650082 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Feb 9 19:50:37.650086 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Feb 9 19:50:37.650090 kernel: vmware: using clock offset of 2744107150 ns Feb 9 19:50:37.650095 kernel: tsc: Detected 3408.000 MHz processor Feb 9 19:50:37.650099 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:50:37.650104 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:50:37.650109 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Feb 9 19:50:37.650113 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:50:37.650117 kernel: total RAM covered: 3072M Feb 9 19:50:37.650123 kernel: Found optimal setting for mtrr clean up Feb 9 19:50:37.650128 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Feb 9 19:50:37.650132 kernel: Using GB pages for direct mapping Feb 9 19:50:37.650137 kernel: ACPI: Early table checksum verification disabled Feb 9 19:50:37.650141 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Feb 9 19:50:37.650145 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Feb 9 19:50:37.650150 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Feb 9 19:50:37.650154 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Feb 9 19:50:37.650159 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Feb 9 19:50:37.650163 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Feb 9 19:50:37.650169 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Feb 9 19:50:37.650175 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Feb 9 19:50:37.650180 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Feb 9 19:50:37.650185 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Feb 9 19:50:37.650190 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Feb 9 19:50:37.650195 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Feb 9 19:50:37.650200 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Feb 9 19:50:37.650205 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Feb 9 19:50:37.650210 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Feb 9 19:50:37.650215 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Feb 9 19:50:37.650219 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Feb 9 19:50:37.650224 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Feb 9 19:50:37.650229 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Feb 9 19:50:37.650234 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Feb 9 19:50:37.650239 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Feb 9 19:50:37.650244 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Feb 9 19:50:37.650249 kernel: system APIC only can use physical flat Feb 9 19:50:37.650254 kernel: Setting APIC routing to physical flat. 
Feb 9 19:50:37.650258 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 19:50:37.650263 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Feb 9 19:50:37.650268 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Feb 9 19:50:37.650273 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Feb 9 19:50:37.650278 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Feb 9 19:50:37.650283 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Feb 9 19:50:37.650288 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Feb 9 19:50:37.650293 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Feb 9 19:50:37.650297 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Feb 9 19:50:37.650302 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Feb 9 19:50:37.650307 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Feb 9 19:50:37.650311 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Feb 9 19:50:37.650316 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Feb 9 19:50:37.650321 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Feb 9 19:50:37.650325 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Feb 9 19:50:37.650331 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Feb 9 19:50:37.650336 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Feb 9 19:50:37.650340 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Feb 9 19:50:37.650345 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Feb 9 19:50:37.650350 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Feb 9 19:50:37.650354 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Feb 9 19:50:37.650359 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Feb 9 19:50:37.650364 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Feb 9 19:50:37.650368 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Feb 9 19:50:37.650373 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Feb 9 19:50:37.650379 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Feb 9 19:50:37.650383 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Feb 9 19:50:37.650388 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Feb 9 19:50:37.650392 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Feb 9 19:50:37.650397 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Feb 9 19:50:37.650402 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Feb 9 19:50:37.650407 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Feb 9 19:50:37.650426 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Feb 9 19:50:37.650430 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Feb 9 19:50:37.650435 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Feb 9 19:50:37.650440 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Feb 9 19:50:37.650445 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Feb 9 19:50:37.650450 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Feb 9 19:50:37.650454 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Feb 9 19:50:37.650459 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Feb 9 19:50:37.650463 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Feb 9 19:50:37.650468 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Feb 9 19:50:37.650472 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Feb 9 19:50:37.650477 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Feb 9 19:50:37.650481 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Feb 9 19:50:37.650495 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Feb 9 19:50:37.654513 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Feb 9 19:50:37.654520 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Feb 9 19:50:37.654525 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Feb 9 19:50:37.654530 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Feb 9 19:50:37.654534 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Feb 9 19:50:37.654539 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Feb 9 19:50:37.654544 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Feb 9 19:50:37.654549 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Feb 9 19:50:37.654553 kernel: SRAT: PXM 0 -> 
APIC 0x6c -> Node 0 Feb 9 19:50:37.654561 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Feb 9 19:50:37.654565 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Feb 9 19:50:37.654570 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Feb 9 19:50:37.654575 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Feb 9 19:50:37.654580 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Feb 9 19:50:37.654586 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Feb 9 19:50:37.654593 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Feb 9 19:50:37.654599 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Feb 9 19:50:37.654604 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Feb 9 19:50:37.654609 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Feb 9 19:50:37.654615 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Feb 9 19:50:37.654620 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Feb 9 19:50:37.654625 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Feb 9 19:50:37.654630 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Feb 9 19:50:37.654635 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Feb 9 19:50:37.654640 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Feb 9 19:50:37.654645 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Feb 9 19:50:37.654650 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Feb 9 19:50:37.654656 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Feb 9 19:50:37.654661 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Feb 9 19:50:37.654666 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Feb 9 19:50:37.654671 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Feb 9 19:50:37.654676 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Feb 9 19:50:37.654681 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Feb 9 19:50:37.654686 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Feb 9 19:50:37.654691 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Feb 9 19:50:37.654696 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Feb 9 19:50:37.654701 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Feb 9 19:50:37.654707 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Feb 9 19:50:37.654712 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Feb 9 19:50:37.654717 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Feb 9 19:50:37.654722 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Feb 9 19:50:37.654727 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Feb 9 19:50:37.654732 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Feb 9 19:50:37.654737 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Feb 9 19:50:37.654742 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Feb 9 19:50:37.654747 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Feb 9 19:50:37.654752 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Feb 9 19:50:37.654758 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Feb 9 19:50:37.654763 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Feb 9 19:50:37.654768 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Feb 9 19:50:37.654773 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Feb 9 19:50:37.654778 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Feb 9 19:50:37.654783 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Feb 9 19:50:37.654788 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Feb 9 19:50:37.654793 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Feb 9 19:50:37.654798 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Feb 9 19:50:37.654804 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Feb 9 19:50:37.654809 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Feb 9 19:50:37.654814 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Feb 9 19:50:37.654819 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Feb 9 19:50:37.654824 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Feb 9 19:50:37.654829 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Feb 9 19:50:37.654834 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Feb 9 19:50:37.654839 
kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Feb 9 19:50:37.654844 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Feb 9 19:50:37.654849 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Feb 9 19:50:37.654855 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Feb 9 19:50:37.654859 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Feb 9 19:50:37.654865 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Feb 9 19:50:37.654870 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Feb 9 19:50:37.654875 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Feb 9 19:50:37.654880 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Feb 9 19:50:37.654884 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Feb 9 19:50:37.654889 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Feb 9 19:50:37.654894 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Feb 9 19:50:37.654899 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Feb 9 19:50:37.654905 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Feb 9 19:50:37.654910 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Feb 9 19:50:37.654915 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Feb 9 19:50:37.654920 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Feb 9 19:50:37.654925 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Feb 9 19:50:37.654930 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Feb 9 19:50:37.654935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 9 19:50:37.654941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 9 19:50:37.654946 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Feb 9 19:50:37.654952 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Feb 9 19:50:37.654958 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Feb 9 19:50:37.654963 kernel: Zone ranges: Feb 9 19:50:37.654968 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:50:37.654974 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Feb 9 19:50:37.654979 kernel: Normal empty Feb 9 19:50:37.654984 kernel: Movable zone start for each node Feb 9 19:50:37.654989 kernel: Early memory node ranges Feb 9 19:50:37.654994 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Feb 9 19:50:37.654999 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Feb 9 19:50:37.655005 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Feb 9 19:50:37.655010 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Feb 9 19:50:37.655015 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:50:37.655020 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Feb 9 19:50:37.655025 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Feb 9 19:50:37.655030 kernel: ACPI: PM-Timer IO Port: 0x1008 Feb 9 19:50:37.655035 kernel: system APIC only can use physical flat Feb 9 19:50:37.655041 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Feb 9 19:50:37.655046 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 9 19:50:37.655052 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 9 19:50:37.655057 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 9 19:50:37.655062 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 9 19:50:37.655067 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 9 19:50:37.655072 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 9 19:50:37.655077 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 9 19:50:37.655082 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 9 19:50:37.655087 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x09] high edge lint[0x1]) Feb 9 19:50:37.655092 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 9 19:50:37.655097 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 9 19:50:37.655103 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 9 19:50:37.655108 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 9 19:50:37.655113 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 9 19:50:37.655118 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 9 19:50:37.655123 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 9 19:50:37.655128 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Feb 9 19:50:37.655133 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Feb 9 19:50:37.655138 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Feb 9 19:50:37.655143 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Feb 9 19:50:37.655148 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Feb 9 19:50:37.655154 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Feb 9 19:50:37.655158 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Feb 9 19:50:37.655163 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Feb 9 19:50:37.655169 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Feb 9 19:50:37.655174 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Feb 9 19:50:37.655179 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Feb 9 19:50:37.655184 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Feb 9 19:50:37.655189 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Feb 9 19:50:37.655194 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Feb 9 19:50:37.655199 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Feb 9 19:50:37.655204 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Feb 9 19:50:37.655209 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Feb 9 19:50:37.655214 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Feb 9 19:50:37.655219 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Feb 9 19:50:37.655224 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Feb 9 19:50:37.655229 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Feb 9 19:50:37.655234 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Feb 9 19:50:37.655239 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Feb 9 19:50:37.655245 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Feb 9 19:50:37.655250 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Feb 9 19:50:37.655255 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Feb 9 19:50:37.655260 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Feb 9 19:50:37.655265 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Feb 9 19:50:37.655270 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Feb 9 19:50:37.655275 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Feb 9 19:50:37.655280 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Feb 9 19:50:37.655285 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Feb 9 19:50:37.655290 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Feb 9 19:50:37.655296 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Feb 9 19:50:37.655301 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Feb 9 19:50:37.655306 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge 
lint[0x1]) Feb 9 19:50:37.655311 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Feb 9 19:50:37.655316 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Feb 9 19:50:37.655321 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Feb 9 19:50:37.655326 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Feb 9 19:50:37.655331 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Feb 9 19:50:37.655336 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Feb 9 19:50:37.655342 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Feb 9 19:50:37.655347 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Feb 9 19:50:37.655352 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Feb 9 19:50:37.655357 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Feb 9 19:50:37.655362 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Feb 9 19:50:37.655367 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Feb 9 19:50:37.655372 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Feb 9 19:50:37.655377 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Feb 9 19:50:37.655382 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Feb 9 19:50:37.655387 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Feb 9 19:50:37.655393 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Feb 9 19:50:37.655398 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Feb 9 19:50:37.655403 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Feb 9 19:50:37.655408 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Feb 9 19:50:37.655413 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Feb 9 19:50:37.655418 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Feb 9 19:50:37.655423 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Feb 9 19:50:37.655428 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Feb 9 19:50:37.655433 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Feb 9 19:50:37.655439 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Feb 9 19:50:37.655444 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Feb 9 19:50:37.655449 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Feb 9 19:50:37.655454 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Feb 9 19:50:37.655459 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Feb 9 19:50:37.655464 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Feb 9 19:50:37.655469 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Feb 9 19:50:37.655474 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Feb 9 19:50:37.655479 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Feb 9 19:50:37.655499 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Feb 9 19:50:37.655505 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Feb 9 19:50:37.655510 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Feb 9 19:50:37.655515 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Feb 9 19:50:37.655520 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Feb 9 19:50:37.655525 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Feb 9 19:50:37.655530 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Feb 9 19:50:37.655535 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Feb 9 19:50:37.655540 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Feb 9 
19:50:37.655545 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Feb 9 19:50:37.655552 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Feb 9 19:50:37.655557 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Feb 9 19:50:37.655562 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Feb 9 19:50:37.655567 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Feb 9 19:50:37.655572 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Feb 9 19:50:37.655577 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Feb 9 19:50:37.655582 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Feb 9 19:50:37.655587 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Feb 9 19:50:37.655592 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Feb 9 19:50:37.655597 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Feb 9 19:50:37.655603 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Feb 9 19:50:37.655608 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Feb 9 19:50:37.655613 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Feb 9 19:50:37.655618 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Feb 9 19:50:37.655623 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Feb 9 19:50:37.655628 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Feb 9 19:50:37.655633 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Feb 9 19:50:37.655638 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Feb 9 19:50:37.655643 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Feb 9 19:50:37.655649 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Feb 9 19:50:37.655654 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Feb 9 19:50:37.655659 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Feb 9 19:50:37.655664 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Feb 9 19:50:37.655669 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Feb 9 19:50:37.655674 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Feb 9 19:50:37.655679 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Feb 9 19:50:37.655684 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Feb 9 19:50:37.655689 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Feb 9 19:50:37.655695 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Feb 9 19:50:37.655700 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Feb 9 19:50:37.655705 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Feb 9 19:50:37.655710 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:50:37.655715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Feb 9 19:50:37.655721 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:50:37.655726 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Feb 9 19:50:37.655731 kernel: TSC deadline timer available Feb 9 19:50:37.655736 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Feb 9 19:50:37.655741 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Feb 9 19:50:37.655747 kernel: Booting paravirtualized kernel on VMware hypervisor Feb 9 19:50:37.655753 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:50:37.655758 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Feb 9 19:50:37.655763 kernel: percpu: Embedded 55 
pages/cpu s185624 r8192 d31464 u262144 Feb 9 19:50:37.655768 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 9 19:50:37.655773 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Feb 9 19:50:37.655778 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Feb 9 19:50:37.655783 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Feb 9 19:50:37.655789 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Feb 9 19:50:37.655794 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Feb 9 19:50:37.655799 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Feb 9 19:50:37.655804 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Feb 9 19:50:37.655815 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Feb 9 19:50:37.655821 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Feb 9 19:50:37.655827 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Feb 9 19:50:37.655832 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Feb 9 19:50:37.655837 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Feb 9 19:50:37.655844 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Feb 9 19:50:37.655849 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Feb 9 19:50:37.655854 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Feb 9 19:50:37.655859 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Feb 9 19:50:37.655864 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Feb 9 19:50:37.655870 kernel: Policy zone: DMA32 Feb 9 19:50:37.655876 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:50:37.655882 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:50:37.655888 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Feb 9 19:50:37.655894 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Feb 9 19:50:37.655899 kernel: printk: log_buf_len min size: 262144 bytes Feb 9 19:50:37.655904 kernel: printk: log_buf_len: 1048576 bytes Feb 9 19:50:37.655910 kernel: printk: early log buf free: 239728(91%) Feb 9 19:50:37.655915 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 19:50:37.655922 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:50:37.655927 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:50:37.655933 kernel: Memory: 1942952K/2096628K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 153416K reserved, 0K cma-reserved) Feb 9 19:50:37.655939 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Feb 9 19:50:37.655945 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:50:37.655950 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:50:37.655960 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:50:37.655966 kernel: rcu: RCU event tracing is enabled. Feb 9 19:50:37.655972 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Feb 9 19:50:37.655978 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:50:37.655983 kernel: Tracing variant of Tasks RCU enabled. 
Feb 9 19:50:37.655989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:50:37.655994 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Feb 9 19:50:37.656000 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Feb 9 19:50:37.656005 kernel: random: crng init done Feb 9 19:50:37.656010 kernel: Console: colour VGA+ 80x25 Feb 9 19:50:37.656016 kernel: printk: console [tty0] enabled Feb 9 19:50:37.656021 kernel: printk: console [ttyS0] enabled Feb 9 19:50:37.656028 kernel: ACPI: Core revision 20210730 Feb 9 19:50:37.656033 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Feb 9 19:50:37.656039 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:50:37.656044 kernel: x2apic enabled Feb 9 19:50:37.656049 kernel: Switched APIC routing to physical x2apic. Feb 9 19:50:37.656055 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 19:50:37.656060 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 9 19:50:37.656066 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Feb 9 19:50:37.656071 kernel: Disabled fast string operations Feb 9 19:50:37.656078 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 19:50:37.656083 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 19:50:37.656088 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:50:37.656094 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 9 19:50:37.656100 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 9 19:50:37.656106 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:50:37.656112 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 9 19:50:37.656117 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 9 19:50:37.656123 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 19:50:37.656129 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 19:50:37.656134 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:50:37.656140 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 9 19:50:37.656145 kernel: GDS: Unknown: Dependent on hypervisor status Feb 9 19:50:37.656151 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:50:37.656156 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:50:37.656162 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:50:37.656167 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:50:37.656173 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 9 19:50:37.656179 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:50:37.656184 kernel: pid_max: default: 131072 minimum: 1024 Feb 9 19:50:37.656190 kernel: LSM: Security Framework initializing Feb 9 19:50:37.656195 kernel: SELinux: Initializing. 
Feb 9 19:50:37.656201 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 19:50:37.656206 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 19:50:37.656212 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 9 19:50:37.656217 kernel: Performance Events: Skylake events, core PMU driver. Feb 9 19:50:37.656224 kernel: core: CPUID marked event: 'cpu cycles' unavailable Feb 9 19:50:37.656229 kernel: core: CPUID marked event: 'instructions' unavailable Feb 9 19:50:37.656235 kernel: core: CPUID marked event: 'bus cycles' unavailable Feb 9 19:50:37.656240 kernel: core: CPUID marked event: 'cache references' unavailable Feb 9 19:50:37.656245 kernel: core: CPUID marked event: 'cache misses' unavailable Feb 9 19:50:37.656250 kernel: core: CPUID marked event: 'branch instructions' unavailable Feb 9 19:50:37.656256 kernel: core: CPUID marked event: 'branch misses' unavailable Feb 9 19:50:37.656261 kernel: ... version: 1 Feb 9 19:50:37.656266 kernel: ... bit width: 48 Feb 9 19:50:37.656272 kernel: ... generic registers: 4 Feb 9 19:50:37.656278 kernel: ... value mask: 0000ffffffffffff Feb 9 19:50:37.656283 kernel: ... max period: 000000007fffffff Feb 9 19:50:37.656289 kernel: ... fixed-purpose events: 0 Feb 9 19:50:37.656294 kernel: ... event mask: 000000000000000f Feb 9 19:50:37.656300 kernel: signal: max sigframe size: 1776 Feb 9 19:50:37.656306 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:50:37.656311 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 19:50:37.656317 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:50:37.656323 kernel: x86: Booting SMP configuration: Feb 9 19:50:37.656328 kernel: .... node #0, CPUs: #1 Feb 9 19:50:37.656334 kernel: Disabled fast string operations Feb 9 19:50:37.656339 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Feb 9 19:50:37.656345 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 9 19:50:37.656350 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:50:37.656355 kernel: smpboot: Max logical packages: 128 Feb 9 19:50:37.656361 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Feb 9 19:50:37.656366 kernel: devtmpfs: initialized Feb 9 19:50:37.656372 kernel: x86/mm: Memory block size: 128MB Feb 9 19:50:37.656378 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Feb 9 19:50:37.656384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:50:37.656389 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 9 19:50:37.656395 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:50:37.656400 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:50:37.656406 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:50:37.656411 kernel: audit: type=2000 audit(1707508236.057:1): state=initialized audit_enabled=0 res=1 Feb 9 19:50:37.656416 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:50:37.656422 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:50:37.656428 kernel: cpuidle: using governor menu Feb 9 19:50:37.656433 kernel: Simple Boot Flag at 0x36 set to 0x80 Feb 9 19:50:37.656439 kernel: ACPI: bus type PCI registered Feb 9 19:50:37.656444 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:50:37.656449 kernel: dca service started, version 1.12.1 Feb 9 
19:50:37.656455 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Feb 9 19:50:37.656460 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Feb 9 19:50:37.656466 kernel: PCI: Using configuration type 1 for base access Feb 9 19:50:37.656471 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 9 19:50:37.656478 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:50:37.656489 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:50:37.656496 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:50:37.656501 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:50:37.656506 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:50:37.656512 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:50:37.656517 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:50:37.656523 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:50:37.656528 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:50:37.656535 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:50:37.656540 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 9 19:50:37.656546 kernel: ACPI: Interpreter enabled Feb 9 19:50:37.656551 kernel: ACPI: PM: (supports S0 S1 S5) Feb 9 19:50:37.656557 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:50:37.656562 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:50:37.656568 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Feb 9 19:50:37.656573 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Feb 9 19:50:37.656650 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:50:37.656701 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Feb 9 19:50:37.656745 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Feb 9 19:50:37.656753 kernel: PCI host bridge to bus 0000:00 Feb 9 19:50:37.656799 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:50:37.656839 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Feb 9 19:50:37.656878 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Feb 9 19:50:37.656919 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Feb 9 19:50:37.656957 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Feb 9 19:50:37.656996 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 9 19:50:37.657036 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:50:37.657074 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Feb 9 19:50:37.657112 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Feb 9 19:50:37.657201 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Feb 9 19:50:37.660708 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Feb 9 19:50:37.660769 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Feb 9 19:50:37.660823 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Feb 9 19:50:37.660871 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Feb 9 19:50:37.660917 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 19:50:37.660965 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 19:50:37.661014 kernel: pci 
0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 19:50:37.661060 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 19:50:37.661110 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Feb 9 19:50:37.661157 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Feb 9 19:50:37.661261 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Feb 9 19:50:37.661321 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Feb 9 19:50:37.661369 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Feb 9 19:50:37.661420 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Feb 9 19:50:37.661472 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Feb 9 19:50:37.661542 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Feb 9 19:50:37.661603 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Feb 9 19:50:37.661648 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Feb 9 19:50:37.661691 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Feb 9 19:50:37.661735 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 19:50:37.661786 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Feb 9 19:50:37.661840 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.661886 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.661937 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.661986 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.662177 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.662238 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.662294 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.662345 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.662398 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.662450 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.662521 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.662575 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.662628 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.662679 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.662731 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.662781 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.662834 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.662886 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.662964 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663030 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663083 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663133 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663187 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663239 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663291 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663341 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663395 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Feb 
9 19:50:37.663445 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663509 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663565 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663618 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663668 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663719 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663770 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663823 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663875 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.663949 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.663999 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.664052 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.664118 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.664170 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.664221 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.664274 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.664323 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.664376 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.664425 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.664478 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.664552 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.665567 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.665629 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.665688 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.665741 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.665796 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.665847 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.665905 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.665956 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.666010 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.666061 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.666115 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.666165 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.666221 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.666272 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.666326 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Feb 9 19:50:37.666376 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.666433 kernel: pci_bus 0000:01: extended config space not accessible Feb 9 19:50:37.666491 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 19:50:37.666548 kernel: pci_bus 0000:02: extended config space not accessible Feb 9 19:50:37.666557 kernel: acpiphp: Slot [32] registered Feb 9 19:50:37.666563 kernel: acpiphp: Slot [33] registered Feb 9 19:50:37.666569 kernel: acpiphp: Slot [34] registered Feb 9 19:50:37.666575 kernel: acpiphp: Slot [35] 
registered Feb 9 19:50:37.666580 kernel: acpiphp: Slot [36] registered Feb 9 19:50:37.666586 kernel: acpiphp: Slot [37] registered Feb 9 19:50:37.666592 kernel: acpiphp: Slot [38] registered Feb 9 19:50:37.666597 kernel: acpiphp: Slot [39] registered Feb 9 19:50:37.666604 kernel: acpiphp: Slot [40] registered Feb 9 19:50:37.666610 kernel: acpiphp: Slot [41] registered Feb 9 19:50:37.666615 kernel: acpiphp: Slot [42] registered Feb 9 19:50:37.666621 kernel: acpiphp: Slot [43] registered Feb 9 19:50:37.666627 kernel: acpiphp: Slot [44] registered Feb 9 19:50:37.666633 kernel: acpiphp: Slot [45] registered Feb 9 19:50:37.666638 kernel: acpiphp: Slot [46] registered Feb 9 19:50:37.666644 kernel: acpiphp: Slot [47] registered Feb 9 19:50:37.666649 kernel: acpiphp: Slot [48] registered Feb 9 19:50:37.666656 kernel: acpiphp: Slot [49] registered Feb 9 19:50:37.666662 kernel: acpiphp: Slot [50] registered Feb 9 19:50:37.666667 kernel: acpiphp: Slot [51] registered Feb 9 19:50:37.666673 kernel: acpiphp: Slot [52] registered Feb 9 19:50:37.666678 kernel: acpiphp: Slot [53] registered Feb 9 19:50:37.666684 kernel: acpiphp: Slot [54] registered Feb 9 19:50:37.666689 kernel: acpiphp: Slot [55] registered Feb 9 19:50:37.666695 kernel: acpiphp: Slot [56] registered Feb 9 19:50:37.666701 kernel: acpiphp: Slot [57] registered Feb 9 19:50:37.666707 kernel: acpiphp: Slot [58] registered Feb 9 19:50:37.666713 kernel: acpiphp: Slot [59] registered Feb 9 19:50:37.666719 kernel: acpiphp: Slot [60] registered Feb 9 19:50:37.666724 kernel: acpiphp: Slot [61] registered Feb 9 19:50:37.666730 kernel: acpiphp: Slot [62] registered Feb 9 19:50:37.666735 kernel: acpiphp: Slot [63] registered Feb 9 19:50:37.666785 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Feb 9 19:50:37.666836 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 9 19:50:37.666886 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 9 19:50:37.666982 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 19:50:37.667047 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Feb 9 19:50:37.667096 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Feb 9 19:50:37.667141 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Feb 9 19:50:37.667186 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Feb 9 19:50:37.667231 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Feb 9 19:50:37.667276 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Feb 9 19:50:37.667321 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Feb 9 19:50:37.667370 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Feb 9 19:50:37.667422 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Feb 9 19:50:37.667471 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Feb 9 19:50:37.667540 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Feb 9 19:50:37.667588 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 9 19:50:37.667634 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 9 19:50:37.667679 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 9 19:50:37.667728 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 9 19:50:37.667774 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 9 19:50:37.667835 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 9 19:50:37.667926 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 9 19:50:37.667996 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 9 19:50:37.668042 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 9 19:50:37.668087 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 19:50:37.668132 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 9 19:50:37.668180 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 9 19:50:37.668224 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 9 19:50:37.668268 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 19:50:37.668313 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 9 19:50:37.668357 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 9 19:50:37.668401 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 19:50:37.668449 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 9 19:50:37.668526 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 9 19:50:37.668604 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 19:50:37.668658 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 9 19:50:37.668702 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 9 19:50:37.668746 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 19:50:37.668793 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 9 19:50:37.668838 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 9 19:50:37.668881 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 9 19:50:37.668927 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 9 19:50:37.668979 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 9 19:50:37.669026 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 19:50:37.669094 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Feb 9 19:50:37.669144 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Feb 9 19:50:37.669206 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Feb 9 19:50:37.669252 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Feb 9 19:50:37.669298 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Feb 9 19:50:37.669344 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 9 19:50:37.669390 kernel: pci 0000:0b:00.0: supports D1 D2 Feb 9 19:50:37.669436 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 19:50:37.669482 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 9 19:50:37.669578 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 9 19:50:37.669640 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 9 19:50:37.669685 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 9 19:50:37.669730 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 9 19:50:37.669775 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 9 19:50:37.669819 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 9 19:50:37.669864 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 19:50:37.669909 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 9 19:50:37.669969 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 9 19:50:37.670041 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 9 19:50:37.670089 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 19:50:37.670136 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 9 19:50:37.670182 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 9 19:50:37.670227 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 19:50:37.670273 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 9 19:50:37.670321 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 9 19:50:37.670365 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 19:50:37.670411 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 9 19:50:37.670455 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 9 19:50:37.670520 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 19:50:37.670569 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 9 19:50:37.670614 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 9 19:50:37.670659 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 9 19:50:37.670709 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 9 19:50:37.670753 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 9 19:50:37.670799 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 19:50:37.670845 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 9 19:50:37.670890 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 9 19:50:37.670934 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 9 19:50:37.670981 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 19:50:37.671026 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 9 19:50:37.671073 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 9 19:50:37.671119 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 9 19:50:37.671164 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 19:50:37.671209 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 9 19:50:37.671253 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 9 19:50:37.671298 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 9 19:50:37.671342 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 19:50:37.671388 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 9 19:50:37.671435 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 9 19:50:37.671479 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Feb 9 19:50:37.674701 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 9 19:50:37.674751 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 9 19:50:37.674797 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 19:50:37.674845 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 9 19:50:37.674890 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 9 19:50:37.674935 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 19:50:37.674984 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 9 19:50:37.675029 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 9 19:50:37.675074 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 19:50:37.675119 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 9 19:50:37.675163 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 9 19:50:37.675207 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 19:50:37.675253 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 9 19:50:37.675298 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 9 19:50:37.675345 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 9 19:50:37.675390 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 19:50:37.675437 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 9 19:50:37.678883 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 9 19:50:37.678945 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 9 19:50:37.678996 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 19:50:37.679046 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 9 19:50:37.679094 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 9 19:50:37.679139 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 19:50:37.679185 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 9 19:50:37.679230 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 9 19:50:37.679276 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 19:50:37.679322 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 9 19:50:37.679367 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 9 19:50:37.679411 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 9 19:50:37.679460 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 9 19:50:37.680387 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 9 19:50:37.680451 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 9 19:50:37.680516 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 9 19:50:37.680565 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 9 19:50:37.680609 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 9 19:50:37.680737 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 9 19:50:37.680996 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 9 19:50:37.681052 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 9 19:50:37.681060 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Feb 9 19:50:37.681066 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Feb 9 19:50:37.681072 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Feb 9 19:50:37.681078 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:50:37.681083 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Feb 9 19:50:37.681089 kernel: iommu: Default domain type: Translated Feb 9 19:50:37.681095 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:50:37.681144 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Feb 9 19:50:37.681195 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 19:50:37.681243 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Feb 9 19:50:37.681251 kernel: vgaarb: loaded Feb 9 19:50:37.681257 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:50:37.681263 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:50:37.681269 kernel: PTP clock support registered Feb 9 19:50:37.681274 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:50:37.681280 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:50:37.681286 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Feb 9 19:50:37.681293 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Feb 9 19:50:37.681298 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Feb 9 19:50:37.681304 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Feb 9 19:50:37.681309 kernel: clocksource: Switched to clocksource tsc-early Feb 9 19:50:37.681315 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:50:37.681321 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:50:37.681326 kernel: pnp: PnP ACPI init Feb 9 19:50:37.681378 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Feb 9 19:50:37.681424 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Feb 9 19:50:37.681471 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Feb 9 19:50:37.683114 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Feb 9 19:50:37.683164 kernel: pnp 00:06: [dma 2] Feb 9 19:50:37.683210 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Feb 9 19:50:37.683253 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Feb 9 19:50:37.683298 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Feb 9 19:50:37.683311 kernel: pnp: PnP ACPI: found 8 devices Feb 9 19:50:37.683318 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:50:37.683324 kernel: NET: Registered PF_INET protocol family Feb 9 19:50:37.683330 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:50:37.683335 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 9 19:50:37.683341 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:50:37.683346 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:50:37.683352 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 19:50:37.683357 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 9 19:50:37.683364 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:50:37.683370 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:50:37.683376 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:50:37.683381 kernel: NET: Registered PF_XDP protocol family Feb 9 19:50:37.683432 kernel: pci 0000:00:15.0: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 9 19:50:37.683479 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 9 19:50:37.683565 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 9 19:50:37.683615 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 9 19:50:37.683661 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 9 19:50:37.683707 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Feb 9 19:50:37.683752 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Feb 9 19:50:37.683797 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Feb 9 19:50:37.683842 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Feb 9 19:50:37.683889 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Feb 9 19:50:37.683933 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Feb 9 19:50:37.683983 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Feb 9 19:50:37.684029 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Feb 9 19:50:37.684074 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Feb 9 19:50:37.684119 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Feb 9 19:50:37.684166 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Feb 9 19:50:37.684210 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Feb 9 19:50:37.684255 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Feb 9 19:50:37.684299 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Feb 9 19:50:37.684345 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Feb 9 19:50:37.684390 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Feb 9 19:50:37.684437 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Feb 9 19:50:37.684482 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Feb 9 19:50:37.685428 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Feb 9 19:50:37.685506 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Feb 9 19:50:37.685554 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.685599 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.685648 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.685692 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.685736 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.685780 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.685824 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.685868 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.685912 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.685972 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 
0x1000] Feb 9 19:50:37.686029 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686074 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686118 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686163 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686207 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686250 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686294 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686338 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686384 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686429 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686472 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686536 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686581 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686635 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686721 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686769 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686816 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686860 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686903 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.686947 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.686991 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687035 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687078 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687122 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687165 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687211 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687255 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687299 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687343 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687387 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687431 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687475 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687586 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687637 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687683 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687727 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687772 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687816 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687860 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687904 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.687948 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.687995 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688043 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688087 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688131 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688175 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688220 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688263 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688307 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688350 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688394 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688437 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688544 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688598 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688642 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688686 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688729 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688772 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688816 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688860 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688903 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.688951 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.688999 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.689045 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.689088 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.689132 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.689176 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.689221 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.689264 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.689308 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.689384 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.689430 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.689474 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 9 19:50:37.689536 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:50:37.691001 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 19:50:37.691062 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Feb 9 19:50:37.691113 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 9 19:50:37.691453 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 9 19:50:37.691526 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 19:50:37.691583 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Feb 9 19:50:37.691902 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] 
Feb 9 19:50:37.691956 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 9 19:50:37.692004 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 9 19:50:37.692065 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Feb 9 19:50:37.692111 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 9 19:50:37.692156 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 9 19:50:37.692200 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 9 19:50:37.692244 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 19:50:37.692293 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 9 19:50:37.692337 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 9 19:50:37.692382 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 9 19:50:37.692425 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 19:50:37.692470 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 9 19:50:37.692591 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 9 19:50:37.692638 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 19:50:37.692977 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 9 19:50:37.693049 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 9 19:50:37.693097 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 19:50:37.693143 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 9 19:50:37.694277 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 9 19:50:37.694331 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 19:50:37.694380 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 9 19:50:37.694426 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 9 19:50:37.694474 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 9 19:50:37.694545 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 9 19:50:37.694592 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 9 19:50:37.694637 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 19:50:37.694684 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Feb 9 19:50:37.694729 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 9 19:50:37.694773 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 9 19:50:37.694816 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 9 19:50:37.694861 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Feb 9 19:50:37.694909 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 9 19:50:37.694953 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 9 19:50:37.694998 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 9 19:50:37.695042 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 19:50:37.695088 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 9 19:50:37.695132 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 9 19:50:37.695175 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 9 19:50:37.695221 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 19:50:37.695264 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 9 19:50:37.695308 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] 
Feb 9 19:50:37.695573 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 19:50:37.695629 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 9 19:50:37.695678 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 9 19:50:37.695725 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 19:50:37.695792 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 9 19:50:37.696058 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 9 19:50:37.696131 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 19:50:37.696436 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 9 19:50:37.696525 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 9 19:50:37.696580 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 9 19:50:37.696648 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 9 19:50:37.701578 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 9 19:50:37.701632 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 19:50:37.701681 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 9 19:50:37.701727 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 9 19:50:37.701772 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 9 19:50:37.701817 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 19:50:37.701863 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 9 19:50:37.701908 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 9 19:50:37.701956 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 9 19:50:37.702001 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 19:50:37.702046 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 9 19:50:37.702090 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 9 19:50:37.702134 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 9 19:50:37.702178 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 19:50:37.702223 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 9 19:50:37.702502 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 9 19:50:37.702558 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 9 19:50:37.702610 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 9 19:50:37.702660 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 9 19:50:37.702707 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 19:50:37.702752 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 9 19:50:37.702798 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 9 19:50:37.702842 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 19:50:37.702887 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 9 19:50:37.702931 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 9 19:50:37.702976 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 19:50:37.703021 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 9 19:50:37.703072 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 9 19:50:37.703118 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 19:50:37.703165 kernel: pci 0000:00:18.0: PCI bridge to [bus 
1b] Feb 9 19:50:37.703209 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 9 19:50:37.703253 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 9 19:50:37.703298 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 19:50:37.703344 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 9 19:50:37.703389 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 9 19:50:37.703433 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 9 19:50:37.703481 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 19:50:37.703535 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 9 19:50:37.703580 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 9 19:50:37.703624 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 19:50:37.703669 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 9 19:50:37.703714 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 9 19:50:37.703758 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 19:50:37.703803 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 9 19:50:37.703847 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 9 19:50:37.703892 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 9 19:50:37.703949 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 9 19:50:37.703994 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 9 19:50:37.704039 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 9 19:50:37.704083 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 9 19:50:37.704127 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 9 19:50:37.704171 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 9 19:50:37.704216 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 9 19:50:37.704260 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 9 19:50:37.704305 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 9 19:50:37.704353 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Feb 9 19:50:37.704394 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Feb 9 19:50:37.704433 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Feb 9 19:50:37.704473 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Feb 9 19:50:37.704524 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Feb 9 19:50:37.704566 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Feb 9 19:50:37.704605 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Feb 9 19:50:37.704647 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Feb 9 19:50:37.704692 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Feb 9 19:50:37.704733 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Feb 9 19:50:37.704774 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 19:50:37.704815 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Feb 9 19:50:37.704855 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Feb 9 19:50:37.704896 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Feb 9 19:50:37.704939 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Feb 
9 19:50:37.704979 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Feb 9 19:50:37.705021 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Feb 9 19:50:37.705061 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Feb 9 19:50:37.705101 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Feb 9 19:50:37.705147 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Feb 9 19:50:37.705188 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Feb 9 19:50:37.705229 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Feb 9 19:50:37.705277 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Feb 9 19:50:37.705319 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Feb 9 19:50:37.705360 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 19:50:37.705405 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Feb 9 19:50:37.705446 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Feb 9 19:50:37.705695 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 19:50:37.705752 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Feb 9 19:50:37.705801 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 19:50:37.705847 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Feb 9 19:50:37.705912 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 19:50:37.706172 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Feb 9 19:50:37.706228 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 19:50:37.706282 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Feb 9 19:50:37.706331 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Feb 9 19:50:37.706379 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Feb 9 19:50:37.706422 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 19:50:37.706468 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Feb 9 19:50:37.706742 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Feb 9 19:50:37.706792 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Feb 9 19:50:37.706839 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Feb 9 19:50:37.706882 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Feb 9 19:50:37.706922 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 19:50:37.706966 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Feb 9 19:50:37.707009 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Feb 9 19:50:37.707050 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 19:50:37.707097 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Feb 9 19:50:37.707139 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 19:50:37.707187 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Feb 9 19:50:37.707229 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 19:50:37.707273 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Feb 9 19:50:37.707315 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 19:50:37.707360 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Feb 9 19:50:37.707403 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit 
pref] Feb 9 19:50:37.707449 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Feb 9 19:50:37.707546 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 19:50:37.707592 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Feb 9 19:50:37.707633 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Feb 9 19:50:37.707674 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 19:50:37.707721 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Feb 9 19:50:37.707762 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Feb 9 19:50:37.707972 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 19:50:37.708023 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Feb 9 19:50:37.708066 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Feb 9 19:50:37.708383 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 19:50:37.708435 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Feb 9 19:50:37.708481 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 9 19:50:37.708690 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Feb 9 19:50:37.708740 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 19:50:37.708787 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Feb 9 19:50:37.708830 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 19:50:37.708876 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Feb 9 19:50:37.708922 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 19:50:37.708991 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Feb 9 19:50:37.709053 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 19:50:37.709101 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Feb 9 19:50:37.709144 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Feb 9 19:50:37.709188 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 19:50:37.709253 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Feb 9 19:50:37.709296 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Feb 9 19:50:37.709339 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 19:50:37.709385 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Feb 9 19:50:37.709428 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 19:50:37.709474 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Feb 9 19:50:37.709545 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 19:50:37.709591 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Feb 9 19:50:37.709634 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Feb 9 19:50:37.709680 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Feb 9 19:50:37.709722 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Feb 9 19:50:37.709768 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Feb 9 19:50:37.709814 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Feb 9 19:50:37.709862 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Feb 9 19:50:37.709905 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 9 19:50:37.709961 kernel: pci 0000:00:00.0: 
Limiting direct PCI/PCI transfers Feb 9 19:50:37.709971 kernel: PCI: CLS 32 bytes, default 64 Feb 9 19:50:37.709978 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 19:50:37.709984 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 9 19:50:37.710198 kernel: clocksource: Switched to clocksource tsc Feb 9 19:50:37.710208 kernel: Initialise system trusted keyrings Feb 9 19:50:37.710215 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 19:50:37.710221 kernel: Key type asymmetric registered Feb 9 19:50:37.710227 kernel: Asymmetric key parser 'x509' registered Feb 9 19:50:37.710233 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:50:37.710239 kernel: io scheduler mq-deadline registered Feb 9 19:50:37.710244 kernel: io scheduler kyber registered Feb 9 19:50:37.710251 kernel: io scheduler bfq registered Feb 9 19:50:37.710308 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Feb 9 19:50:37.710361 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.710421 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Feb 9 19:50:37.710471 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.710766 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Feb 9 19:50:37.711097 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.711157 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Feb 9 19:50:37.711211 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.711265 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Feb 9 19:50:37.711585 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.711642 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Feb 9 19:50:37.711693 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.711739 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Feb 9 19:50:37.711790 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.711836 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Feb 9 19:50:37.711882 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.711928 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Feb 9 19:50:37.711981 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712029 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Feb 9 19:50:37.712077 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712124 kernel: pcieport 0000:00:16.2: PME: Signaling with 
IRQ 34 Feb 9 19:50:37.712169 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712215 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Feb 9 19:50:37.712260 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712306 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Feb 9 19:50:37.712354 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712399 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Feb 9 19:50:37.712445 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712524 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Feb 9 19:50:37.712572 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712620 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Feb 9 19:50:37.712666 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712712 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Feb 9 19:50:37.712757 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712801 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Feb 9 19:50:37.712845 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.712905 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Feb 9 19:50:37.712956 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.713019 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Feb 9 19:50:37.713067 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.713113 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Feb 9 19:50:37.713158 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.713204 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Feb 9 19:50:37.713275 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.713542 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Feb 9 19:50:37.713594 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.713641 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Feb 9 19:50:37.713687 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.713736 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Feb 9 19:50:37.713782 kernel: 
pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.713827 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Feb 9 19:50:37.713878 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.713927 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Feb 9 19:50:37.713976 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.714024 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Feb 9 19:50:37.714088 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.714356 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Feb 9 19:50:37.714411 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.714460 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Feb 9 19:50:37.714547 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.714806 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Feb 9 19:50:37.714858 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.714907 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Feb 9 19:50:37.714954 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:50:37.714963 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:50:37.714974 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:50:37.714980 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:50:37.714987 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Feb 9 19:50:37.714993 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:50:37.714999 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:50:37.715048 kernel: rtc_cmos 00:01: registered as rtc0 Feb 9 19:50:37.715092 kernel: rtc_cmos 00:01: setting system clock to 2024-02-09T19:50:37 UTC (1707508237) Feb 9 19:50:37.715101 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:50:37.715143 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Feb 9 19:50:37.715152 kernel: fail to initialize ptp_kvm Feb 9 19:50:37.715158 kernel: intel_pstate: CPU model not supported Feb 9 19:50:37.715164 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:50:37.715170 kernel: Segment Routing with IPv6 Feb 9 19:50:37.715176 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:50:37.715182 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:50:37.715188 kernel: Key type dns_resolver registered Feb 9 19:50:37.715195 kernel: IPI shorthand broadcast: enabled Feb 9 19:50:37.715202 kernel: sched_clock: Marking stable (872411763, 223768819)->(1164609077, -68428495) Feb 9 19:50:37.715208 kernel: registered taskstats version 1 Feb 9 19:50:37.715214 kernel: Loading compiled-in X.509 certificates Feb 9 
19:50:37.715220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:50:37.715226 kernel: Key type .fscrypt registered Feb 9 19:50:37.715232 kernel: Key type fscrypt-provisioning registered Feb 9 19:50:37.715238 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 19:50:37.715243 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:50:37.715250 kernel: ima: No architecture policies found Feb 9 19:50:37.715257 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:50:37.715263 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:50:37.715269 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:50:37.715275 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:50:37.715281 kernel: Run /init as init process Feb 9 19:50:37.715287 kernel: with arguments: Feb 9 19:50:37.715293 kernel: /init Feb 9 19:50:37.715299 kernel: with environment: Feb 9 19:50:37.715305 kernel: HOME=/ Feb 9 19:50:37.715312 kernel: TERM=linux Feb 9 19:50:37.715318 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:50:37.715325 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:50:37.715333 systemd[1]: Detected virtualization vmware. Feb 9 19:50:37.715340 systemd[1]: Detected architecture x86-64. Feb 9 19:50:37.715346 systemd[1]: Running in initrd. Feb 9 19:50:37.715352 systemd[1]: No hostname configured, using default hostname. Feb 9 19:50:37.715358 systemd[1]: Hostname set to . Feb 9 19:50:37.715365 systemd[1]: Initializing machine ID from random generator. Feb 9 19:50:37.715371 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:50:37.715377 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:50:37.715383 systemd[1]: Reached target cryptsetup.target. Feb 9 19:50:37.715389 systemd[1]: Reached target paths.target. Feb 9 19:50:37.715395 systemd[1]: Reached target slices.target. Feb 9 19:50:37.715401 systemd[1]: Reached target swap.target. Feb 9 19:50:37.715407 systemd[1]: Reached target timers.target. Feb 9 19:50:37.715415 systemd[1]: Listening on iscsid.socket. Feb 9 19:50:37.715421 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:50:37.715427 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:50:37.715433 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:50:37.715439 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:50:37.715446 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:50:37.715452 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:50:37.715458 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:50:37.715465 systemd[1]: Reached target sockets.target. Feb 9 19:50:37.715471 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:50:37.715477 systemd[1]: Finished network-cleanup.service. Feb 9 19:50:37.715490 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:50:37.715497 systemd[1]: Starting systemd-journald.service... Feb 9 19:50:37.715503 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:50:37.715509 systemd[1]: Starting systemd-resolved.service... Feb 9 19:50:37.715515 systemd[1]: Starting systemd-vconsole-setup.service... 
Feb 9 19:50:37.715521 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:50:37.715529 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:50:37.715536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:50:37.715542 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:50:37.715548 kernel: audit: type=1130 audit(1707508237.660:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.715554 systemd[1]: Started systemd-resolved.service. Feb 9 19:50:37.715560 kernel: audit: type=1130 audit(1707508237.670:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.715567 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:50:37.715573 kernel: audit: type=1130 audit(1707508237.674:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.715580 systemd[1]: Reached target nss-lookup.target. Feb 9 19:50:37.715586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:50:37.715600 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:50:37.715606 kernel: Bridge firewalling registered Feb 9 19:50:37.715612 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:50:37.715618 kernel: audit: type=1130 audit(1707508237.696:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.715624 kernel: SCSI subsystem initialized Feb 9 19:50:37.715630 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:50:37.715637 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:50:37.715643 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:50:37.715649 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:50:37.715659 systemd-journald[216]: Journal started Feb 9 19:50:37.715691 systemd-journald[216]: Runtime Journal (/run/log/journal/3dd74efc9c004ba9a26a42a59680f673) is 4.8M, max 38.8M, 34.0M free. Feb 9 19:50:37.716844 systemd[1]: Started systemd-journald.service. Feb 9 19:50:37.716860 kernel: audit: type=1130 audit(1707508237.715:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:37.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.664779 systemd-resolved[218]: Positive Trust Anchors: Feb 9 19:50:37.664786 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:50:37.664817 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:50:37.666455 systemd-resolved[218]: Defaulting to hostname 'linux'. Feb 9 19:50:37.667035 systemd-modules-load[217]: Inserted module 'overlay' Feb 9 19:50:37.685981 systemd-modules-load[217]: Inserted module 'br_netfilter' Feb 9 19:50:37.721835 dracut-cmdline[233]: dracut-dracut-053 Feb 9 19:50:37.721835 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 19:50:37.721835 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:50:37.733674 kernel: audit: type=1130 audit(1707508237.721:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.722987 systemd-modules-load[217]: Inserted module 'dm_multipath' Feb 9 19:50:37.723303 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:50:37.723777 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:50:37.735300 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:50:37.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.738519 kernel: audit: type=1130 audit(1707508237.734:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.753496 kernel: Loading iSCSI transport class v2.0-870. 
Feb 9 19:50:37.761498 kernel: iscsi: registered transport (tcp) Feb 9 19:50:37.774602 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:50:37.774625 kernel: QLogic iSCSI HBA Driver Feb 9 19:50:37.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.790315 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:50:37.793690 kernel: audit: type=1130 audit(1707508237.788:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:37.790942 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:50:37.828514 kernel: raid6: avx2x4 gen() 48414 MB/s Feb 9 19:50:37.845493 kernel: raid6: avx2x4 xor() 21857 MB/s Feb 9 19:50:37.862497 kernel: raid6: avx2x2 gen() 54530 MB/s Feb 9 19:50:37.879527 kernel: raid6: avx2x2 xor() 32024 MB/s Feb 9 19:50:37.896496 kernel: raid6: avx2x1 gen() 44030 MB/s Feb 9 19:50:37.913526 kernel: raid6: avx2x1 xor() 27567 MB/s Feb 9 19:50:37.930497 kernel: raid6: sse2x4 gen() 21120 MB/s Feb 9 19:50:37.947528 kernel: raid6: sse2x4 xor() 11898 MB/s Feb 9 19:50:37.964529 kernel: raid6: sse2x2 gen() 21719 MB/s Feb 9 19:50:37.981495 kernel: raid6: sse2x2 xor() 13479 MB/s Feb 9 19:50:37.998493 kernel: raid6: sse2x1 gen() 18187 MB/s Feb 9 19:50:38.015707 kernel: raid6: sse2x1 xor() 8986 MB/s Feb 9 19:50:38.015723 kernel: raid6: using algorithm avx2x2 gen() 54530 MB/s Feb 9 19:50:38.015731 kernel: raid6: .... xor() 32024 MB/s, rmw enabled Feb 9 19:50:38.016834 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:50:38.025501 kernel: xor: automatically using best checksumming function avx Feb 9 19:50:38.084501 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:50:38.089026 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:50:38.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:38.089652 systemd[1]: Starting systemd-udevd.service... Feb 9 19:50:38.087000 audit: BPF prog-id=7 op=LOAD Feb 9 19:50:38.087000 audit: BPF prog-id=8 op=LOAD Feb 9 19:50:38.094498 kernel: audit: type=1130 audit(1707508238.087:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:38.099640 systemd-udevd[415]: Using default interface naming scheme 'v252'. Feb 9 19:50:38.102189 systemd[1]: Started systemd-udevd.service. Feb 9 19:50:38.102677 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:50:38.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:38.110026 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Feb 9 19:50:38.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:38.125481 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:50:38.126016 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:50:38.184685 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 19:50:38.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:38.234724 kernel: VMware PVSCSI driver - version 1.0.7.0-k Feb 9 19:50:38.234756 kernel: vmw_pvscsi: using 64bit dma Feb 9 19:50:38.235934 kernel: vmw_pvscsi: max_id: 16 Feb 9 19:50:38.235957 kernel: vmw_pvscsi: setting ring_pages to 8 Feb 9 19:50:38.245496 kernel: vmw_pvscsi: enabling reqCallThreshold Feb 9 19:50:38.245512 kernel: vmw_pvscsi: driver-based request coalescing enabled Feb 9 19:50:38.245520 kernel: vmw_pvscsi: using MSI-X Feb 9 19:50:38.249319 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Feb 9 19:50:38.249403 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Feb 9 19:50:38.249464 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Feb 9 19:50:38.250498 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Feb 9 19:50:38.262850 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Feb 9 19:50:38.262945 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:50:38.272655 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:50:38.272681 kernel: AES CTR mode by8 optimization enabled Feb 9 19:50:38.273495 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Feb 9 19:50:38.276500 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Feb 9 19:50:38.278501 kernel: libata version 3.00 loaded. Feb 9 19:50:38.282531 kernel: ata_piix 0000:00:07.1: version 2.13 Feb 9 19:50:38.283499 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Feb 9 19:50:38.283599 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:50:38.283700 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Feb 9 19:50:38.283770 kernel: sd 0:0:0:0: [sda] Cache data unavailable Feb 9 19:50:38.283828 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Feb 9 19:50:38.285498 kernel: scsi host1: ata_piix Feb 9 19:50:38.289497 kernel: scsi host2: ata_piix Feb 9 19:50:38.289584 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Feb 9 19:50:38.289598 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Feb 9 19:50:38.289605 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:50:38.290512 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:50:38.455505 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Feb 9 19:50:38.459502 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Feb 9 19:50:38.486915 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Feb 9 19:50:38.487021 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (466) Feb 9 19:50:38.487033 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:50:38.490292 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:50:38.493021 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:50:38.495123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:50:38.498426 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:50:38.498562 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:50:38.499186 systemd[1]: Starting disk-uuid.service... 
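The dev-disk-by\x2dlabel-*.device and dev-disk-by\x2dpartuuid-*.device units found above correspond to udev-managed symlinks under /dev/disk. A minimal sketch for inspecting those symlinks on a booted system follows; the directories are the standard udev locations, and the example output in the comment is illustrative rather than taken from this boot:

    #!/usr/bin/env python3
    # Sketch: list /dev/disk symlinks and the block devices they resolve to.
    import os

    for link_dir in ("/dev/disk/by-label", "/dev/disk/by-partlabel", "/dev/disk/by-partuuid"):
        if not os.path.isdir(link_dir):      # udev may not have populated this directory yet
            continue
        for name in sorted(os.listdir(link_dir)):
            link = os.path.join(link_dir, name)
            # e.g. /dev/disk/by-label/OEM -> /dev/sda6
            print(f"{link} -> {os.path.realpath(link)}")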
Feb 9 19:50:38.512689 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 9 19:50:38.524494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:50:38.532496 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:50:39.536388 disk-uuid[547]: The operation has completed successfully. Feb 9 19:50:39.536703 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:50:39.568742 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:50:39.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.568806 systemd[1]: Finished disk-uuid.service. Feb 9 19:50:39.569390 systemd[1]: Starting verity-setup.service... Feb 9 19:50:39.578522 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:50:39.624406 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:50:39.624954 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:50:39.626753 systemd[1]: Finished verity-setup.service. Feb 9 19:50:39.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.677496 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:50:39.677608 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:50:39.678206 systemd[1]: Starting afterburn-network-kargs.service... Feb 9 19:50:39.678689 systemd[1]: Starting ignition-setup.service... Feb 9 19:50:39.696073 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:50:39.696112 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:50:39.696126 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:50:39.700502 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:50:39.707726 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:50:39.713279 systemd[1]: Finished ignition-setup.service. Feb 9 19:50:39.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.713832 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:50:39.764502 systemd[1]: Finished afterburn-network-kargs.service. Feb 9 19:50:39.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.765344 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:50:39.811258 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:50:39.812103 systemd[1]: Starting systemd-networkd.service... Feb 9 19:50:39.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:39.810000 audit: BPF prog-id=9 op=LOAD Feb 9 19:50:39.825682 systemd-networkd[734]: lo: Link UP Feb 9 19:50:39.825688 systemd-networkd[734]: lo: Gained carrier Feb 9 19:50:39.825945 systemd-networkd[734]: Enumeration completed Feb 9 19:50:39.826136 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Feb 9 19:50:39.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.826685 systemd[1]: Started systemd-networkd.service. Feb 9 19:50:39.826830 systemd[1]: Reached target network.target. Feb 9 19:50:39.827587 systemd[1]: Starting iscsiuio.service... Feb 9 19:50:39.831641 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 9 19:50:39.831758 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 9 19:50:39.831272 systemd-networkd[734]: ens192: Link UP Feb 9 19:50:39.831274 systemd-networkd[734]: ens192: Gained carrier Feb 9 19:50:39.832183 systemd[1]: Started iscsiuio.service. Feb 9 19:50:39.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.832990 systemd[1]: Starting iscsid.service... Feb 9 19:50:39.834877 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:50:39.834877 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:50:39.834877 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:50:39.834877 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:50:39.834877 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:50:39.835820 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:50:39.836187 systemd[1]: Started iscsid.service. Feb 9 19:50:39.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.837789 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:50:39.844142 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:50:39.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.844457 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:50:39.844990 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:50:39.845221 systemd[1]: Reached target remote-fs.target. Feb 9 19:50:39.845852 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:50:39.852042 systemd[1]: Finished dracut-pre-mount.service. 
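iscsid's warning above spells out the file it expects and its format. The sketch below writes such a file; the IQN value is a made-up placeholder, not anything taken from this system, and the script would need root privileges (iscsid would also have to be restarted to pick the file up):

    #!/usr/bin/env python3
    # Sketch only: create the /etc/iscsi/initiatorname.iscsi file iscsid asks for.
    from pathlib import Path

    initiator_name = "iqn.2024-02.io.example:flatcar-node1"   # hypothetical IQN, pick your own
    path = Path("/etc/iscsi/initiatorname.iscsi")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"InitiatorName={initiator_name}\n")
    print(f"wrote {path}")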
Feb 9 19:50:39.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.866906 ignition[606]: Ignition 2.14.0 Feb 9 19:50:39.866914 ignition[606]: Stage: fetch-offline Feb 9 19:50:39.866947 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:50:39.866964 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:50:39.870732 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:50:39.870966 ignition[606]: parsed url from cmdline: "" Feb 9 19:50:39.871007 ignition[606]: no config URL provided Feb 9 19:50:39.871134 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:50:39.871285 ignition[606]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:50:39.879324 ignition[606]: config successfully fetched Feb 9 19:50:39.879386 ignition[606]: parsing config with SHA512: f74e50a3067105bf2add124ccd4e600c54c3533fedab3c428aa32451672564b7a474cd3f1ed1319f94d4bb66a2bf9a35f1dd06cf9c1bc3d33bdcd0fe4328eb2b Feb 9 19:50:39.907193 unknown[606]: fetched base config from "system" Feb 9 19:50:39.907403 unknown[606]: fetched user config from "vmware" Feb 9 19:50:39.908099 ignition[606]: fetch-offline: fetch-offline passed Feb 9 19:50:39.908293 ignition[606]: Ignition finished successfully Feb 9 19:50:39.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.909022 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:50:39.909177 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 19:50:39.909672 systemd[1]: Starting ignition-kargs.service... Feb 9 19:50:39.915026 ignition[754]: Ignition 2.14.0 Feb 9 19:50:39.915274 ignition[754]: Stage: kargs Feb 9 19:50:39.915444 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:50:39.915604 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:50:39.916984 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:50:39.918585 ignition[754]: kargs: kargs passed Feb 9 19:50:39.918727 ignition[754]: Ignition finished successfully Feb 9 19:50:39.919589 systemd[1]: Finished ignition-kargs.service. Feb 9 19:50:39.920160 systemd[1]: Starting ignition-disks.service... Feb 9 19:50:39.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:39.924406 ignition[760]: Ignition 2.14.0 Feb 9 19:50:39.924622 ignition[760]: Stage: disks Feb 9 19:50:39.924789 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:50:39.924940 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:50:39.926297 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:50:39.928074 ignition[760]: disks: disks passed Feb 9 19:50:39.928217 ignition[760]: Ignition finished successfully Feb 9 19:50:39.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.928812 systemd[1]: Finished ignition-disks.service. Feb 9 19:50:39.928959 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:50:39.929051 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:50:39.929134 systemd[1]: Reached target local-fs.target. Feb 9 19:50:39.929215 systemd[1]: Reached target sysinit.target. Feb 9 19:50:39.929292 systemd[1]: Reached target basic.target. Feb 9 19:50:39.929833 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:50:39.954716 systemd-fsck[768]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:50:39.956260 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:50:39.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:39.957000 systemd[1]: Mounting sysroot.mount... Feb 9 19:50:39.966434 systemd[1]: Mounted sysroot.mount. Feb 9 19:50:39.966745 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:50:39.966648 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:50:39.967837 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:50:39.968289 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:50:39.968315 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:50:39.968333 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:50:39.970050 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:50:39.970790 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:50:39.974504 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:50:39.978662 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:50:39.981774 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:50:39.983983 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:50:40.014963 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:50:40.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:40.015582 systemd[1]: Starting ignition-mount.service... Feb 9 19:50:40.016054 systemd[1]: Starting sysroot-boot.service... Feb 9 19:50:40.020229 bash[819]: umount: /sysroot/usr/share/oem: not mounted. 
Feb 9 19:50:40.025491 ignition[820]: INFO : Ignition 2.14.0 Feb 9 19:50:40.025752 ignition[820]: INFO : Stage: mount Feb 9 19:50:40.025935 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:50:40.026096 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:50:40.027773 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:50:40.029593 ignition[820]: INFO : mount: mount passed Feb 9 19:50:40.029748 ignition[820]: INFO : Ignition finished successfully Feb 9 19:50:40.030321 systemd[1]: Finished ignition-mount.service. Feb 9 19:50:40.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:40.038066 systemd[1]: Finished sysroot-boot.service. Feb 9 19:50:40.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:40.643361 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:50:40.656510 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (829) Feb 9 19:50:40.658953 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:50:40.658986 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:50:40.658997 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:50:40.662505 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:50:40.664393 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:50:40.665042 systemd[1]: Starting ignition-files.service... 
Feb 9 19:50:40.675306 ignition[849]: INFO : Ignition 2.14.0 Feb 9 19:50:40.675628 ignition[849]: INFO : Stage: files Feb 9 19:50:40.675807 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:50:40.675960 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:50:40.677465 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:50:40.679912 ignition[849]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:50:40.680499 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:50:40.680659 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:50:40.684061 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:50:40.684340 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:50:40.684896 unknown[849]: wrote ssh authorized keys file for user: core Feb 9 19:50:40.685120 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:50:40.685868 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:50:40.686115 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:50:40.720376 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:50:40.775197 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:50:40.775469 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:50:40.775709 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:50:40.775902 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:50:40.776110 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:50:41.241246 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:50:41.338525 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:50:41.338895 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:50:41.339122 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:50:41.339358 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:50:41.498734 systemd-networkd[734]: ens192: Gained 
IPv6LL Feb 9 19:50:41.777111 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:50:41.837409 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:50:41.837779 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:50:41.843566 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:50:41.843803 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:50:41.908055 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:50:42.061201 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 19:50:42.061509 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:50:42.061509 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:50:42.061509 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:50:42.108116 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:50:42.522923 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:50:42.523224 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:50:42.523224 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:50:42.523224 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:50:42.569794 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 19:50:42.731329 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: 
createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:50:42.731792 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:50:42.733740 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:50:42.738560 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:50:42.738780 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:50:42.745911 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Feb 9 19:50:42.746110 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:50:42.770802 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem106321208" Feb 9 19:50:42.772331 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem106321208": device or resource busy Feb 9 19:50:42.772331 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem106321208", trying btrfs: device or resource busy Feb 9 19:50:42.772331 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem106321208" Feb 9 19:50:42.772905 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (849) Feb 9 19:50:42.773000 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem106321208" Feb 9 19:50:42.782035 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem106321208" Feb 9 19:50:42.782442 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem106321208" Feb 9 19:50:42.782681 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Feb 9 19:50:42.783266 systemd[1]: mnt-oem106321208.mount: Deactivated successfully. 
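Several Ignition entries above report that a downloaded artifact "matches expected sum of" a SHA-512 digest before it is written to /sysroot. The check itself is a streaming hash comparison; here is a minimal stand-alone sketch, where the file path and expected digest are command-line placeholders rather than values reused from this boot:

    #!/usr/bin/env python3
    # Sketch: compare a file's SHA-512 digest against an expected value.
    import hashlib
    import sys

    def sha512_of(path, chunk=1 << 20):
        h = hashlib.sha512()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    if __name__ == "__main__":
        path, expected = sys.argv[1], sys.argv[2]   # e.g. ./kubectl <expected-sha512>
        actual = sha512_of(path)
        print("OK" if actual == expected else f"MISMATCH: {actual}")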
Feb 9 19:50:42.787843 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Feb 9 19:50:42.788216 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Feb 9 19:50:42.788419 ignition[849]: INFO : files: op(15): [started] processing unit "vmtoolsd.service" Feb 9 19:50:42.788588 ignition[849]: INFO : files: op(15): [finished] processing unit "vmtoolsd.service" Feb 9 19:50:42.788746 ignition[849]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:50:42.788926 ignition[849]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:50:42.789189 ignition[849]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:50:42.789392 ignition[849]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:50:42.789556 ignition[849]: INFO : files: op(18): [started] processing unit "prepare-critools.service" Feb 9 19:50:42.789723 ignition[849]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:50:42.789977 ignition[849]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:50:42.790176 ignition[849]: INFO : files: op(18): [finished] processing unit "prepare-critools.service" Feb 9 19:50:42.790478 ignition[849]: INFO : files: op(1a): [started] processing unit "prepare-helm.service" Feb 9 19:50:42.790478 ignition[849]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:50:42.790478 ignition[849]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:50:42.790478 ignition[849]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service" Feb 9 19:50:42.790478 ignition[849]: INFO : files: op(1c): [started] processing unit "coreos-metadata.service" Feb 9 19:50:42.790478 ignition[849]: INFO : files: op(1c): op(1d): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(1c): op(1d): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(1c): [finished] processing unit "coreos-metadata.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(1e): [started] processing unit "containerd.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(1e): op(1f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(1e): op(1f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(1e): [finished] processing unit "containerd.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(20): [started] setting preset to enabled for 
"vmtoolsd.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(20): [finished] setting preset to enabled for "vmtoolsd.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(24): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 19:50:42.791684 ignition[849]: INFO : files: op(24): op(25): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:50:43.048249 ignition[849]: INFO : files: op(24): op(25): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:50:43.048249 ignition[849]: INFO : files: op(24): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 19:50:43.048249 ignition[849]: INFO : files: createResultFile: createFiles: op(26): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:50:43.048249 ignition[849]: INFO : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:50:43.048249 ignition[849]: INFO : files: files passed Feb 9 19:50:43.048249 ignition[849]: INFO : Ignition finished successfully Feb 9 19:50:43.048703 systemd[1]: Finished ignition-files.service. Feb 9 19:50:43.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.049898 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:50:43.054713 kernel: kauditd_printk_skb: 24 callbacks suppressed Feb 9 19:50:43.054729 kernel: audit: type=1130 audit(1707508243.047:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.050062 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:50:43.050451 systemd[1]: Starting ignition-quench.service... Feb 9 19:50:43.060417 kernel: audit: type=1130 audit(1707508243.053:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.060431 kernel: audit: type=1131 audit(1707508243.053:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:43.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.054737 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:50:43.054783 systemd[1]: Finished ignition-quench.service. Feb 9 19:50:43.061599 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:50:43.061960 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:50:43.065018 kernel: audit: type=1130 audit(1707508243.060:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.062284 systemd[1]: Reached target ignition-complete.target. Feb 9 19:50:43.065530 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:50:43.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.074156 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:50:43.080464 kernel: audit: type=1130 audit(1707508243.072:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.080480 kernel: audit: type=1131 audit(1707508243.072:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.074220 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:50:43.074417 systemd[1]: Reached target initrd-fs.target. Feb 9 19:50:43.079459 systemd[1]: Reached target initrd.target. Feb 9 19:50:43.079620 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:50:43.080170 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:50:43.090502 kernel: audit: type=1130 audit(1707508243.085:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.087105 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:50:43.087714 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:50:43.094181 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:50:43.094452 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:50:43.094758 systemd[1]: Stopped target timers.target. Feb 9 19:50:43.095036 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Feb 9 19:50:43.095246 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:50:43.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.096585 systemd[1]: Stopped target initrd.target. Feb 9 19:50:43.098150 systemd[1]: Stopped target basic.target. Feb 9 19:50:43.098344 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:50:43.098547 kernel: audit: type=1131 audit(1707508243.094:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.098543 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:50:43.098733 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:50:43.098913 systemd[1]: Stopped target remote-fs.target. Feb 9 19:50:43.099087 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:50:43.099279 systemd[1]: Stopped target sysinit.target. Feb 9 19:50:43.099453 systemd[1]: Stopped target local-fs.target. Feb 9 19:50:43.099637 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:50:43.099826 systemd[1]: Stopped target swap.target. Feb 9 19:50:43.099987 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:50:43.100072 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:50:43.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.100373 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:50:43.102910 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:50:43.102993 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:50:43.105692 kernel: audit: type=1131 audit(1707508243.098:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.105710 kernel: audit: type=1131 audit(1707508243.101:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.103258 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:50:43.103340 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:50:43.105910 systemd[1]: Stopped target paths.target. Feb 9 19:50:43.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.106119 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:50:43.110505 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:50:43.110700 systemd[1]: Stopped target slices.target. Feb 9 19:50:43.110868 systemd[1]: Stopped target sockets.target. Feb 9 19:50:43.111032 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:50:43.111114 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Feb 9 19:50:43.111376 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:50:43.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.111449 systemd[1]: Stopped ignition-files.service. Feb 9 19:50:43.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.112185 systemd[1]: Stopping ignition-mount.service... Feb 9 19:50:43.113345 systemd[1]: Stopping iscsid.service... Feb 9 19:50:43.117449 iscsid[739]: iscsid shutting down. Feb 9 19:50:43.113419 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:50:43.113511 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:50:43.114089 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:50:43.114184 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:50:43.114277 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:50:43.114455 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:50:43.114539 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:50:43.116259 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:50:43.116313 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:50:43.120494 ignition[888]: INFO : Ignition 2.14.0 Feb 9 19:50:43.120494 ignition[888]: INFO : Stage: umount Feb 9 19:50:43.120494 ignition[888]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:50:43.120494 ignition[888]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:50:43.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.120433 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:50:43.123346 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:50:43.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:50:43.120505 systemd[1]: Stopped iscsid.service. Feb 9 19:50:43.120652 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:50:43.120668 systemd[1]: Closed iscsid.socket. Feb 9 19:50:43.121978 systemd[1]: Stopping iscsiuio.service... Feb 9 19:50:43.123096 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:50:43.123145 systemd[1]: Stopped iscsiuio.service. Feb 9 19:50:43.124321 systemd[1]: Stopped target network.target. Feb 9 19:50:43.124613 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:50:43.124630 systemd[1]: Closed iscsiuio.socket. Feb 9 19:50:43.124880 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:50:43.125352 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:50:43.129019 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:50:43.129069 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:50:43.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.129530 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:50:43.129551 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:50:43.130105 systemd[1]: Stopping network-cleanup.service... Feb 9 19:50:43.130245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:50:43.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.130279 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:50:43.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.131086 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Feb 9 19:50:43.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.131111 systemd[1]: Stopped afterburn-network-kargs.service. Feb 9 19:50:43.131241 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:50:43.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.131261 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:50:43.131598 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:50:43.131622 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:50:43.132374 ignition[888]: INFO : umount: umount passed Feb 9 19:50:43.132564 ignition[888]: INFO : Ignition finished successfully Feb 9 19:50:43.131000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:50:43.133294 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:50:43.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:43.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.133610 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:50:43.133672 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:50:43.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.134110 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:50:43.134159 systemd[1]: Stopped ignition-mount.service. Feb 9 19:50:43.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.135000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:50:43.134459 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:50:43.134480 systemd[1]: Stopped ignition-disks.service. Feb 9 19:50:43.134757 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:50:43.134777 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:50:43.134902 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:50:43.134921 systemd[1]: Stopped ignition-setup.service. Feb 9 19:50:43.136759 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:50:43.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.138691 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:50:43.138746 systemd[1]: Stopped network-cleanup.service. Feb 9 19:50:43.140629 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:50:43.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.142333 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:50:43.142397 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:50:43.142554 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:50:43.142572 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:50:43.142668 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:50:43.142683 systemd[1]: Closed systemd-udevd-kernel.socket. 
Feb 9 19:50:43.142768 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:50:43.142787 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:50:43.142885 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:50:43.142904 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:50:43.142999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:50:43.143016 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:50:43.143434 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:50:43.143536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:50:43.143559 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:50:43.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.146634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:50:43.146677 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:50:43.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:43.150275 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:50:43.150320 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:50:43.150458 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:50:43.150555 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:50:43.150577 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:50:43.151003 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:50:43.162061 systemd[1]: Switching root. Feb 9 19:50:43.161000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:50:43.161000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:50:43.161000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:50:43.162000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:50:43.162000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:50:43.180042 systemd-journald[216]: Journal stopped Feb 9 19:50:45.730699 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Feb 9 19:50:45.730723 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:50:45.730732 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:50:45.730738 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:50:45.730743 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:50:45.730749 kernel: SELinux: policy capability open_perms=1 Feb 9 19:50:45.730756 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:50:45.730762 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:50:45.730767 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:50:45.730773 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:50:45.730778 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:50:45.730784 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:50:45.730791 systemd[1]: Successfully loaded SELinux policy in 84.180ms. Feb 9 19:50:45.730799 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.965ms. Feb 9 19:50:45.730807 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:50:45.730813 systemd[1]: Detected virtualization vmware. Feb 9 19:50:45.730820 systemd[1]: Detected architecture x86-64. Feb 9 19:50:45.730827 systemd[1]: Detected first boot. Feb 9 19:50:45.730834 systemd[1]: Initializing machine ID from random generator. Feb 9 19:50:45.730840 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:50:45.730846 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:50:45.730853 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:50:45.730861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:50:45.730869 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:50:45.730877 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:50:45.730884 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:50:45.730890 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:50:45.730897 systemd[1]: Created slice system-getty.slice. Feb 9 19:50:45.730903 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:50:45.730910 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:50:45.730916 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:50:45.730924 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:50:45.730930 systemd[1]: Created slice user.slice. Feb 9 19:50:45.730937 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:50:45.730943 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:50:45.730949 systemd[1]: Set up automount boot.automount. Feb 9 19:50:45.730956 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:50:45.730962 systemd[1]: Reached target integritysetup.target. Feb 9 19:50:45.730969 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:50:45.730975 systemd[1]: Reached target remote-fs.target. 
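On this first boot systemd initializes the machine ID from the random generator; the same 32-hex-digit ID later shows up as the journal directory name under /run/log/journal. The sketch below is a simplified stand-in for that step, not systemd's actual code, but it produces an ID in the same /etc/machine-id format (32 lowercase hex characters from 128 random bits):

    #!/usr/bin/env python3
    # Simplified stand-in for "Initializing machine ID from random generator".
    import uuid

    machine_id = uuid.uuid4().hex   # 128 random bits, hex-encoded, lowercase
    print(machine_id)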
Feb 9 19:50:45.730984 systemd[1]: Reached target slices.target. Feb 9 19:50:45.730992 systemd[1]: Reached target swap.target. Feb 9 19:50:45.730999 systemd[1]: Reached target torcx.target. Feb 9 19:50:45.731006 systemd[1]: Reached target veritysetup.target. Feb 9 19:50:45.731013 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:50:45.731020 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:50:45.731028 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:50:45.731035 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:50:45.731044 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:50:45.731051 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:50:45.731058 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:50:45.731064 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:50:45.731071 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:50:45.731078 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:50:45.731087 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:50:45.731094 systemd[1]: Mounting media.mount... Feb 9 19:50:45.731101 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:50:45.731108 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:50:45.731115 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:50:45.731122 systemd[1]: Mounting tmp.mount... Feb 9 19:50:45.731129 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:50:45.731137 systemd[1]: Starting ignition-delete-config.service... Feb 9 19:50:45.731144 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:50:45.731152 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:50:45.731158 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:50:45.731165 systemd[1]: Starting modprobe@drm.service... Feb 9 19:50:45.731172 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:50:45.731179 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:50:45.731187 systemd[1]: Starting modprobe@loop.service... Feb 9 19:50:45.731196 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:50:45.731205 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:50:45.731214 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:50:45.731226 systemd[1]: Starting systemd-journald.service... Feb 9 19:50:45.731236 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:50:45.731244 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:50:45.731251 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:50:45.731261 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:50:45.731271 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:50:45.731279 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:50:45.731291 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:50:45.731300 systemd[1]: Mounted media.mount. Feb 9 19:50:45.731311 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:50:45.731319 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:50:45.731326 systemd[1]: Mounted tmp.mount. Feb 9 19:50:45.731333 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:50:45.731340 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 9 19:50:45.731346 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:50:45.731353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:50:45.731361 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:50:45.731368 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:50:45.731375 systemd[1]: Finished modprobe@drm.service. Feb 9 19:50:45.731381 kernel: loop: module loaded Feb 9 19:50:45.731388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:50:45.731394 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:50:45.731401 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:50:45.731408 systemd[1]: Finished modprobe@loop.service. Feb 9 19:50:45.731417 systemd-journald[1031]: Journal started Feb 9 19:50:45.731451 systemd-journald[1031]: Runtime Journal (/run/log/journal/93440164b77e4411a4d52726e14dfc6e) is 4.8M, max 38.8M, 34.0M free. Feb 9 19:50:45.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.740163 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:50:45.740180 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:50:45.740190 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:50:45.740198 systemd[1]: Reached target network-pre.target. Feb 9 19:50:45.740207 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:50:45.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.718000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:50:45.718000 audit[1031]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffbd69dda0 a2=4000 a3=7fffbd69de3c items=0 ppid=1 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:50:45.718000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:50:45.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:45.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.740769 jq[1018]: true Feb 9 19:50:45.741266 jq[1044]: true Feb 9 19:50:45.744853 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:50:45.755352 kernel: fuse: init (API version 7.34) Feb 9 19:50:45.755387 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:50:45.755400 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:50:45.755413 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:50:45.755421 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:50:45.766641 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:50:45.766677 systemd[1]: Started systemd-journald.service. Feb 9 19:50:45.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.761508 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:50:45.761607 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:50:45.761754 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:50:45.762726 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Feb 9 19:50:45.763564 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:50:45.764945 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:50:45.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.775681 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:50:45.776689 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:50:45.782068 systemd-journald[1031]: Time spent on flushing to /var/log/journal/93440164b77e4411a4d52726e14dfc6e is 19.896ms for 1981 entries. Feb 9 19:50:45.782068 systemd-journald[1031]: System Journal (/var/log/journal/93440164b77e4411a4d52726e14dfc6e) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:50:46.448419 systemd-journald[1031]: Received client request to flush runtime journal. Feb 9 19:50:45.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.840639 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:50:46.449086 udevadm[1095]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:50:46.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:45.862049 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:50:45.863131 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:50:46.144062 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:50:46.145217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:50:46.389402 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:50:46.389628 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:50:46.445776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:50:46.448972 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:50:46.480852 ignition[1056]: Ignition 2.14.0 Feb 9 19:50:46.481104 ignition[1056]: deleting config from guestinfo properties Feb 9 19:50:46.484086 ignition[1056]: Successfully deleted config Feb 9 19:50:46.484698 systemd[1]: Finished ignition-delete-config.service. 
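The journal flush statistics above report 1981 runtime entries flushed to /var/log/journal in roughly 19.9 ms. A hedged sketch of how one might build a comparable per-unit tally for the current boot, assuming journalctl is on PATH and supports JSON output:

```python
#!/usr/bin/env python3
# Sketch only: count current-boot journal entries per unit, in the spirit of the
# flush statistics journald prints above. Assumes journalctl is available.
import json
import subprocess
from collections import Counter

out = subprocess.run(
    ["journalctl", "-b", "-o", "json", "--no-pager"],
    capture_output=True, text=True, check=True,
).stdout

per_unit = Counter()
for line in out.splitlines():
    entry = json.loads(line)
    per_unit[str(entry.get("_SYSTEMD_UNIT", "<kernel/other>"))] += 1

for unit, count in per_unit.most_common(10):
    print(f"{count:6d}  {unit}")
```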
Feb 9 19:50:46.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.577632 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:50:46.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.578661 systemd[1]: Starting systemd-udevd.service... Feb 9 19:50:46.590517 systemd-udevd[1110]: Using default interface naming scheme 'v252'. Feb 9 19:50:46.619542 systemd[1]: Started systemd-udevd.service. Feb 9 19:50:46.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.621129 systemd[1]: Starting systemd-networkd.service... Feb 9 19:50:46.628073 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:50:46.657760 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:50:46.657907 systemd[1]: Started systemd-userdbd.service. Feb 9 19:50:46.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.703505 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:50:46.717495 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:50:46.773013 systemd-networkd[1111]: lo: Link UP Feb 9 19:50:46.773018 systemd-networkd[1111]: lo: Gained carrier Feb 9 19:50:46.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.773719 systemd-networkd[1111]: Enumeration completed Feb 9 19:50:46.773793 systemd[1]: Started systemd-networkd.service. Feb 9 19:50:46.774401 systemd-networkd[1111]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Feb 9 19:50:46.777437 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 9 19:50:46.777596 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 9 19:50:46.778976 systemd-networkd[1111]: ens192: Link UP Feb 9 19:50:46.779112 systemd-networkd[1111]: ens192: Gained carrier Feb 9 19:50:46.779492 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Feb 9 19:50:46.785635 (udev-worker)[1121]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. 
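Above, systemd-networkd matches ens192 against /etc/systemd/network/00-vmware.network and the vmxnet3 link comes up at 10000 Mbps. The sketch below reads the corresponding state straight from sysfs; the interface name is taken from the log and the sysfs layout is the standard kernel one, but treat this as an illustration rather than part of the boot record.

```python
#!/usr/bin/env python3
# Sketch only: read link state for the interface named in the log (ens192)
# from /sys/class/net, roughly what "Link UP" / "Gained carrier" reflect above.
from pathlib import Path

def link_state(ifname: str) -> dict:
    base = Path("/sys/class/net") / ifname
    def read(name: str) -> str:
        try:
            return (base / name).read_text().strip()
        except OSError:
            return "n/a"   # e.g. carrier is unreadable while the link is down
    return {
        "operstate": read("operstate"),  # e.g. "up"
        "carrier": read("carrier"),      # "1" once the link has carrier
        "speed_mbps": read("speed"),     # e.g. "10000" for the vmxnet3 NIC
        "mtu": read("mtu"),
    }

print(link_state("ens192"))
```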
Feb 9 19:50:46.773000 audit[1122]: AVC avc: denied { confidentiality } for pid=1122 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:50:46.789531 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Feb 9 19:50:46.789778 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Feb 9 19:50:46.773000 audit[1122]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55bd81bdc960 a1=32194 a2=7f6a56d0bbc5 a3=5 items=108 ppid=1110 pid=1122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:50:46.773000 audit: CWD cwd="/" Feb 9 19:50:46.773000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=1 name=(null) inode=17264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=2 name=(null) inode=17264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=3 name=(null) inode=17265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=4 name=(null) inode=17264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=5 name=(null) inode=17266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=6 name=(null) inode=17264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=7 name=(null) inode=17267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=8 name=(null) inode=17267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=9 name=(null) inode=17268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=10 name=(null) inode=17267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=11 name=(null) inode=17269 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=12 name=(null) inode=17267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=13 name=(null) inode=17270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=14 name=(null) inode=17267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=15 name=(null) inode=17271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=16 name=(null) inode=17267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=17 name=(null) inode=17272 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=18 name=(null) inode=17264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=19 name=(null) inode=17273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=20 name=(null) inode=17273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=21 name=(null) inode=17274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=22 name=(null) inode=17273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=23 name=(null) inode=17275 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=24 name=(null) inode=17273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=25 name=(null) inode=17276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=26 name=(null) inode=17273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=27 name=(null) inode=17277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=28 name=(null) inode=17273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH 
item=29 name=(null) inode=17278 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=30 name=(null) inode=17264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=31 name=(null) inode=17279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=32 name=(null) inode=17279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=33 name=(null) inode=17280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=34 name=(null) inode=17279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=35 name=(null) inode=17281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=36 name=(null) inode=17279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=37 name=(null) inode=17282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=38 name=(null) inode=17279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=39 name=(null) inode=17283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=40 name=(null) inode=17279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=41 name=(null) inode=17284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=42 name=(null) inode=17264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=43 name=(null) inode=17285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=44 name=(null) inode=17285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=45 name=(null) inode=17286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=46 name=(null) inode=17285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=47 name=(null) inode=17287 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=48 name=(null) inode=17285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=49 name=(null) inode=17288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=50 name=(null) inode=17285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=51 name=(null) inode=17289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=52 name=(null) inode=17285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=53 name=(null) inode=17290 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=55 name=(null) inode=17291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=56 name=(null) inode=17291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=57 name=(null) inode=17292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=58 name=(null) inode=17291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=59 name=(null) inode=17293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=60 name=(null) inode=17291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=61 name=(null) inode=17294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=62 name=(null) inode=17294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=63 name=(null) inode=17295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=64 name=(null) inode=17294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=65 name=(null) inode=17296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=66 name=(null) inode=17294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=67 name=(null) inode=17297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=68 name=(null) inode=17294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=69 name=(null) inode=17298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=70 name=(null) inode=17294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=71 name=(null) inode=17299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=72 name=(null) inode=17291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=73 name=(null) inode=17300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=74 name=(null) inode=17300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=75 name=(null) inode=17301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=76 name=(null) inode=17300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=77 name=(null) inode=17302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=78 name=(null) inode=17300 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=79 name=(null) inode=17303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=80 name=(null) inode=17300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=81 name=(null) inode=17304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=82 name=(null) inode=17300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=83 name=(null) inode=17305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=84 name=(null) inode=17291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=85 name=(null) inode=17306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=86 name=(null) inode=17306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=87 name=(null) inode=17307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=88 name=(null) inode=17306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=89 name=(null) inode=17308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=90 name=(null) inode=17306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=91 name=(null) inode=17309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=92 name=(null) inode=17306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=93 name=(null) inode=17310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=94 name=(null) inode=17306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.792490 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Feb 9 19:50:46.773000 audit: PATH item=95 name=(null) inode=17311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=96 name=(null) inode=17291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=97 name=(null) inode=17312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=98 name=(null) inode=17312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=99 name=(null) inode=17313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=100 name=(null) inode=17312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=101 name=(null) inode=17314 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=102 name=(null) inode=17312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=103 name=(null) inode=17315 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=104 name=(null) inode=17312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=105 name=(null) inode=17316 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=106 name=(null) inode=17312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PATH item=107 name=(null) inode=17317 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:46.773000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:50:46.806499 kernel: Guest personality initialized and is active Feb 9 19:50:46.808495 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 9 19:50:46.808536 kernel: Initialized host personality Feb 9 19:50:46.818508 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:50:46.823503 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:50:46.844499 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 
scanned by (udev-worker) (1117) Feb 9 19:50:46.853129 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 19:50:46.854847 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:50:46.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.855851 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:50:46.883595 lvm[1144]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:50:46.908115 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:50:46.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.908302 systemd[1]: Reached target cryptsetup.target. Feb 9 19:50:46.909333 systemd[1]: Starting lvm2-activation.service... Feb 9 19:50:46.912382 lvm[1146]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:50:46.940161 systemd[1]: Finished lvm2-activation.service. Feb 9 19:50:46.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:46.940331 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:50:46.940427 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:50:46.940439 systemd[1]: Reached target local-fs.target. Feb 9 19:50:46.940536 systemd[1]: Reached target machines.target. Feb 9 19:50:46.941517 systemd[1]: Starting ldconfig.service... Feb 9 19:50:46.948644 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:50:46.948677 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:50:46.949624 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:50:46.950302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:50:46.951196 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:50:46.951347 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:50:46.951374 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:50:46.952093 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:50:46.963494 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1149 (bootctl) Feb 9 19:50:46.964253 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:50:46.965139 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:50:46.980796 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:50:46.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:50:46.985420 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:50:46.991697 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:50:47.339719 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:50:47.340183 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:50:47.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:47.412768 systemd-fsck[1158]: fsck.fat 4.2 (2021-01-31) Feb 9 19:50:47.412768 systemd-fsck[1158]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:50:47.416381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:50:47.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:47.417495 systemd[1]: Mounting boot.mount... Feb 9 19:50:47.431325 systemd[1]: Mounted boot.mount. Feb 9 19:50:47.443763 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:50:47.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:47.540956 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:50:47.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:47.542192 systemd[1]: Starting audit-rules.service... Feb 9 19:50:47.543482 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:50:47.545223 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:50:47.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:47.548543 systemd[1]: Starting systemd-resolved.service... Feb 9 19:50:47.550204 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:50:47.551744 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:50:47.552416 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:50:47.553412 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:50:47.555000 audit[1172]: SYSTEM_BOOT pid=1172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:50:47.564016 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:50:47.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:47.577397 systemd[1]: Finished systemd-journal-catalog-update.service. 
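systemd-tmpfiles reports duplicate lines for /run/lock, /root and /var/lib/systemd coming from different tmpfiles.d fragments. A minimal sketch for locating such overlaps, assuming the standard tmpfiles.d directories and the usual "type path mode user group age argument" line format:

```python
#!/usr/bin/env python3
# Sketch only: find tmpfiles.d paths declared by more than one fragment, the
# situation behind the "Duplicate line for path" warnings above.
from collections import defaultdict
from pathlib import Path

declared = defaultdict(list)  # path -> list of fragments declaring it

for conf_dir in (Path("/etc/tmpfiles.d"), Path("/run/tmpfiles.d"), Path("/usr/lib/tmpfiles.d")):
    if not conf_dir.is_dir():
        continue
    for conf in sorted(conf_dir.glob("*.conf")):
        for line in conf.read_text(errors="replace").splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) >= 2:
                declared[fields[1]].append(conf.name)

for path, sources in sorted(declared.items()):
    if len(set(sources)) > 1:
        print(f"{path}: declared in {sorted(set(sources))}")
```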
Feb 9 19:50:47.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:47.589197 augenrules[1189]: No rules Feb 9 19:50:47.587000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:50:47.587000 audit[1189]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff36bd5e00 a2=420 a3=0 items=0 ppid=1166 pid=1189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:50:47.587000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:50:47.589846 systemd[1]: Finished audit-rules.service. Feb 9 19:50:47.634942 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:50:47.635140 systemd[1]: Reached target time-set.target. Feb 9 19:50:47.646724 systemd-resolved[1169]: Positive Trust Anchors: Feb 9 19:50:47.646732 systemd-resolved[1169]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:50:47.646750 systemd-resolved[1169]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:50:47.672351 systemd-resolved[1169]: Defaulting to hostname 'linux'. Feb 9 19:50:47.673517 systemd[1]: Started systemd-resolved.service. Feb 9 19:50:47.673700 systemd[1]: Reached target network.target. Feb 9 19:50:47.673799 systemd[1]: Reached target nss-lookup.target. Feb 9 19:51:32.081178 systemd-resolved[1169]: Clock change detected. Flushing caches. Feb 9 19:51:32.081384 systemd-timesyncd[1171]: Contacted time server 147.182.158.78:123 (0.flatcar.pool.ntp.org). Feb 9 19:51:32.081673 systemd-timesyncd[1171]: Initial clock synchronization to Fri 2024-02-09 19:51:32.081131 UTC. Feb 9 19:51:32.218346 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:51:32.234644 systemd[1]: Finished ldconfig.service. Feb 9 19:51:32.235775 systemd[1]: Starting systemd-update-done.service... Feb 9 19:51:32.240123 systemd[1]: Finished systemd-update-done.service. Feb 9 19:51:32.240295 systemd[1]: Reached target sysinit.target. Feb 9 19:51:32.240435 systemd[1]: Started motdgen.path. Feb 9 19:51:32.240543 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:51:32.240720 systemd[1]: Started logrotate.timer. Feb 9 19:51:32.240841 systemd[1]: Started mdadm.timer. Feb 9 19:51:32.240925 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:51:32.241018 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:51:32.241042 systemd[1]: Reached target paths.target. Feb 9 19:51:32.241122 systemd[1]: Reached target timers.target. Feb 9 19:51:32.241362 systemd[1]: Listening on dbus.socket. Feb 9 19:51:32.242269 systemd[1]: Starting docker.socket... 
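The auditctl record above hex-encodes its PROCTITLE field because the command line contains NUL-separated arguments. A small sketch, using only the value printed in the log, recovers the invocation that loaded /etc/audit/audit.rules:

```python
#!/usr/bin/env python3
# Sketch only: decode the hex PROCTITLE from the audit record above. The audit
# framework hex-encodes proctitle when argv elements are separated by NUL bytes.
proctitle_hex = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

argv = [arg.decode() for arg in bytes.fromhex(proctitle_hex).split(b"\x00")]
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```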
Feb 9 19:51:32.243375 systemd[1]: Listening on sshd.socket. Feb 9 19:51:32.243522 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:51:32.243812 systemd[1]: Listening on docker.socket. Feb 9 19:51:32.243907 systemd[1]: Reached target sockets.target. Feb 9 19:51:32.244001 systemd[1]: Reached target basic.target. Feb 9 19:51:32.244159 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:51:32.244185 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:51:32.244199 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:51:32.244961 systemd[1]: Starting containerd.service... Feb 9 19:51:32.246155 systemd[1]: Starting dbus.service... Feb 9 19:51:32.247070 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:51:32.247999 systemd[1]: Starting extend-filesystems.service... Feb 9 19:51:32.251160 jq[1203]: false Feb 9 19:51:32.248132 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:51:32.248938 systemd[1]: Starting motdgen.service... Feb 9 19:51:32.249855 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:51:32.250764 systemd[1]: Starting prepare-critools.service... Feb 9 19:51:32.252137 systemd[1]: Starting prepare-helm.service... Feb 9 19:51:32.254546 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:51:32.263011 systemd[1]: Starting sshd-keygen.service... Feb 9 19:51:32.266724 systemd[1]: Starting systemd-logind.service... Feb 9 19:51:32.266845 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:51:32.266876 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:51:32.267721 systemd[1]: Starting update-engine.service... Feb 9 19:51:32.272599 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:51:32.273528 systemd[1]: Starting vmtoolsd.service... Feb 9 19:51:32.274798 jq[1226]: true Feb 9 19:51:32.276281 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:51:32.278211 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:51:32.283102 jq[1233]: true Feb 9 19:51:32.284047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:51:32.284169 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:51:32.292630 extend-filesystems[1204]: Found sda Feb 9 19:51:32.292630 extend-filesystems[1204]: Found sda1 Feb 9 19:51:32.292630 extend-filesystems[1204]: Found sda2 Feb 9 19:51:32.292630 extend-filesystems[1204]: Found sda3 Feb 9 19:51:32.292630 extend-filesystems[1204]: Found usr Feb 9 19:51:32.292630 extend-filesystems[1204]: Found sda4 Feb 9 19:51:32.292630 extend-filesystems[1204]: Found sda6 Feb 9 19:51:32.292630 extend-filesystems[1204]: Found sda7 Feb 9 19:51:32.292630 extend-filesystems[1204]: Found sda9 Feb 9 19:51:32.292630 extend-filesystems[1204]: Checking size of /dev/sda9 Feb 9 19:51:32.293375 systemd[1]: Started vmtoolsd.service. 
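extend-filesystems enumerates sda and its partitions (sda1 through sda9, plus sr0) before deciding to keep the old size of /dev/sda9. For reference, a sketch that prints a similar inventory from /proc/partitions; the 1 KiB block unit is the standard /proc/partitions convention, and nothing in the code is specific to this machine:

```python
#!/usr/bin/env python3
# Sketch only: list block devices and sizes from /proc/partitions, comparable to
# the device inventory extend-filesystems logs above (sda, sda1 ... sda9, sr0).
from pathlib import Path

lines = Path("/proc/partitions").read_text().splitlines()
for line in lines[2:]:                       # skip the header row and blank line
    major, minor, blocks, name = line.split()
    size_gib = int(blocks) / (1024 * 1024)   # /proc/partitions counts 1 KiB blocks
    print(f"{name:8s} {size_gib:8.2f} GiB")
```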
Feb 9 19:51:32.337449 tar[1230]: ./ Feb 9 19:51:32.337449 tar[1230]: ./macvlan Feb 9 19:51:32.340429 tar[1231]: crictl Feb 9 19:51:32.317892 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:51:32.343811 tar[1232]: linux-amd64/helm Feb 9 19:51:32.344855 extend-filesystems[1204]: Old size kept for /dev/sda9 Feb 9 19:51:32.344855 extend-filesystems[1204]: Found sr0 Feb 9 19:51:32.352283 env[1235]: time="2024-02-09T19:51:32.341037702Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:51:32.319462 systemd[1]: Finished extend-filesystems.service. Feb 9 19:51:32.345462 dbus-daemon[1202]: [system] SELinux support is enabled Feb 9 19:51:32.329138 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:51:32.329264 systemd[1]: Finished motdgen.service. Feb 9 19:51:32.345581 systemd[1]: Started dbus.service. Feb 9 19:51:32.346902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:51:32.346947 systemd[1]: Reached target system-config.target. Feb 9 19:51:32.347096 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:51:32.347106 systemd[1]: Reached target user-config.target. Feb 9 19:51:32.358408 kernel: NET: Registered PF_VSOCK protocol family Feb 9 19:51:32.380208 env[1235]: time="2024-02-09T19:51:32.380174341Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:51:32.380292 env[1235]: time="2024-02-09T19:51:32.380263968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381016 env[1235]: time="2024-02-09T19:51:32.380994933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381016 env[1235]: time="2024-02-09T19:51:32.381011614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381144 env[1235]: time="2024-02-09T19:51:32.381129351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381144 env[1235]: time="2024-02-09T19:51:32.381142192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381191 env[1235]: time="2024-02-09T19:51:32.381150417Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:51:32.381191 env[1235]: time="2024-02-09T19:51:32.381156074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381234 env[1235]: time="2024-02-09T19:51:32.381195166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381331 env[1235]: time="2024-02-09T19:51:32.381317968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381412 env[1235]: time="2024-02-09T19:51:32.381398574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:51:32.381412 env[1235]: time="2024-02-09T19:51:32.381409748Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:51:32.381461 env[1235]: time="2024-02-09T19:51:32.381435565Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:51:32.381461 env[1235]: time="2024-02-09T19:51:32.381443902Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:51:32.396426 env[1235]: time="2024-02-09T19:51:32.396346149Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:51:32.396426 env[1235]: time="2024-02-09T19:51:32.396377725Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:51:32.396426 env[1235]: time="2024-02-09T19:51:32.396386534Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:51:32.396426 env[1235]: time="2024-02-09T19:51:32.396427251Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396568 env[1235]: time="2024-02-09T19:51:32.396439973Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396568 env[1235]: time="2024-02-09T19:51:32.396448424Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396568 env[1235]: time="2024-02-09T19:51:32.396455230Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396568 env[1235]: time="2024-02-09T19:51:32.396463246Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396568 env[1235]: time="2024-02-09T19:51:32.396471451Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396568 env[1235]: time="2024-02-09T19:51:32.396488092Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396568 env[1235]: time="2024-02-09T19:51:32.396497941Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396568 env[1235]: time="2024-02-09T19:51:32.396505960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:51:32.396696 env[1235]: time="2024-02-09T19:51:32.396585470Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:51:32.396696 env[1235]: time="2024-02-09T19:51:32.396644513Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 9 19:51:32.396875 env[1235]: time="2024-02-09T19:51:32.396861384Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:51:32.396905 env[1235]: time="2024-02-09T19:51:32.396880419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396905 env[1235]: time="2024-02-09T19:51:32.396889070Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:51:32.396942 env[1235]: time="2024-02-09T19:51:32.396914354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396942 env[1235]: time="2024-02-09T19:51:32.396930296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396942 env[1235]: time="2024-02-09T19:51:32.396937723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396985 env[1235]: time="2024-02-09T19:51:32.396944218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396985 env[1235]: time="2024-02-09T19:51:32.396951715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396985 env[1235]: time="2024-02-09T19:51:32.396958460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396985 env[1235]: time="2024-02-09T19:51:32.396965418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396985 env[1235]: time="2024-02-09T19:51:32.396971472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.396985 env[1235]: time="2024-02-09T19:51:32.396978407Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:51:32.397087 env[1235]: time="2024-02-09T19:51:32.397055183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.397123 env[1235]: time="2024-02-09T19:51:32.397086230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.397123 env[1235]: time="2024-02-09T19:51:32.397094973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:51:32.397123 env[1235]: time="2024-02-09T19:51:32.397101612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:51:32.397123 env[1235]: time="2024-02-09T19:51:32.397110615Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:51:32.397123 env[1235]: time="2024-02-09T19:51:32.397116873Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:51:32.397200 env[1235]: time="2024-02-09T19:51:32.397127239Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:51:32.397200 env[1235]: time="2024-02-09T19:51:32.397159226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:51:32.397323 env[1235]: time="2024-02-09T19:51:32.397282049Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.397325963Z" level=info msg="Connect containerd service" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.397353097Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.397690665Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.397854121Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.397918435Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.398156512Z" level=info msg="Start subscribing containerd event" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.398181959Z" level=info msg="Start recovering state" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.398209758Z" level=info msg="Start event monitor" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.398227156Z" level=info msg="Start snapshots syncer" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.398234392Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:51:32.403611 env[1235]: time="2024-02-09T19:51:32.398238590Z" level=info msg="Start streaming server" Feb 9 19:51:32.398005 systemd[1]: Started containerd.service. Feb 9 19:51:32.403857 env[1235]: time="2024-02-09T19:51:32.403620059Z" level=info msg="containerd successfully booted in 0.072334s" Feb 9 19:51:32.405279 update_engine[1220]: I0209 19:51:32.404469 1220 main.cc:92] Flatcar Update Engine starting Feb 9 19:51:32.411015 update_engine[1220]: I0209 19:51:32.408804 1220 update_check_scheduler.cc:74] Next update check in 4m32s Feb 9 19:51:32.407039 systemd[1]: Started update-engine.service. Feb 9 19:51:32.408314 systemd[1]: Started locksmithd.service. Feb 9 19:51:32.416790 bash[1280]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:51:32.416973 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:51:32.437327 systemd-logind[1219]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:51:32.437342 systemd-logind[1219]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:51:32.437450 systemd-logind[1219]: New seat seat0. Feb 9 19:51:32.440499 systemd[1]: Started systemd-logind.service. Feb 9 19:51:32.460640 tar[1230]: ./static Feb 9 19:51:32.504919 tar[1230]: ./vlan Feb 9 19:51:32.562209 tar[1230]: ./portmap Feb 9 19:51:32.589730 systemd-networkd[1111]: ens192: Gained IPv6LL Feb 9 19:51:32.607010 tar[1230]: ./host-local Feb 9 19:51:32.648792 tar[1230]: ./vrf Feb 9 19:51:32.693049 tar[1230]: ./bridge Feb 9 19:51:32.744792 tar[1230]: ./tuning Feb 9 19:51:32.782674 tar[1230]: ./firewall Feb 9 19:51:32.838086 tar[1230]: ./host-device Feb 9 19:51:32.870148 tar[1230]: ./sbr Feb 9 19:51:32.891168 tar[1230]: ./loopback Feb 9 19:51:32.919609 tar[1230]: ./dhcp Feb 9 19:51:32.991219 systemd[1]: Finished prepare-critools.service. Feb 9 19:51:32.996496 tar[1232]: linux-amd64/LICENSE Feb 9 19:51:32.996560 tar[1232]: linux-amd64/README.md Feb 9 19:51:33.000343 systemd[1]: Finished prepare-helm.service. Feb 9 19:51:33.025936 tar[1230]: ./ptp Feb 9 19:51:33.050211 tar[1230]: ./ipvlan Feb 9 19:51:33.073684 tar[1230]: ./bandwidth Feb 9 19:51:33.130927 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:51:33.410768 sshd_keygen[1224]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:51:33.426305 systemd[1]: Finished sshd-keygen.service. Feb 9 19:51:33.427630 systemd[1]: Starting issuegen.service... Feb 9 19:51:33.431780 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:51:33.431900 systemd[1]: Finished issuegen.service. Feb 9 19:51:33.433129 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:51:33.440933 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:51:33.441935 systemd[1]: Started getty@tty1.service. Feb 9 19:51:33.442878 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:51:33.443084 systemd[1]: Reached target getty.target. 
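The tar[1230] entries above are the CNI plugin archive being unpacked piece by piece (bridge, dhcp, ptp, ...), and sshd-keygen reports fresh RSA/ECDSA/ED25519 host keys. A quick spot-check from a shell might look like the sketch below; /opt/cni/bin is the NetworkPluginBinDir named in the CRI config earlier, while the host-key path is the usual OpenSSH default and is assumed here rather than taken from the log.

    # List the unpacked CNI plugin binaries
    ls -l /opt/cni/bin
    # Fingerprint of one of the newly generated host keys (default OpenSSH path, assumed)
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub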
Feb 9 19:51:33.443260 systemd[1]: Reached target multi-user.target. Feb 9 19:51:33.444428 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:51:33.449759 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:51:33.449876 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:51:33.452708 systemd[1]: Startup finished in 6.608s (kernel) + 5.901s (userspace) = 12.510s. Feb 9 19:51:33.491428 locksmithd[1284]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:51:33.589839 login[1363]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 19:51:33.590336 login[1362]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:51:33.597268 systemd[1]: Created slice user-500.slice. Feb 9 19:51:33.597987 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:51:33.602405 systemd-logind[1219]: New session 2 of user core. Feb 9 19:51:33.605701 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:51:33.606627 systemd[1]: Starting user@500.service... Feb 9 19:51:33.610340 (systemd)[1368]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:51:33.690811 systemd[1368]: Queued start job for default target default.target. Feb 9 19:51:33.691186 systemd[1368]: Reached target paths.target. Feb 9 19:51:33.691257 systemd[1368]: Reached target sockets.target. Feb 9 19:51:33.691364 systemd[1368]: Reached target timers.target. Feb 9 19:51:33.691427 systemd[1368]: Reached target basic.target. Feb 9 19:51:33.691506 systemd[1368]: Reached target default.target. Feb 9 19:51:33.691566 systemd[1]: Started user@500.service. Feb 9 19:51:33.691651 systemd[1368]: Startup finished in 77ms. Feb 9 19:51:33.692308 systemd[1]: Started session-2.scope. Feb 9 19:51:34.591657 login[1363]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:51:34.595331 systemd[1]: Started session-1.scope. Feb 9 19:51:34.595530 systemd-logind[1219]: New session 1 of user core. Feb 9 19:52:12.484756 systemd[1]: Created slice system-sshd.slice. Feb 9 19:52:12.485610 systemd[1]: Started sshd@0-139.178.70.110:22-139.178.89.65:59802.service. Feb 9 19:52:12.524665 sshd[1391]: Accepted publickey for core from 139.178.89.65 port 59802 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:52:12.525361 sshd[1391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:52:12.528234 systemd[1]: Started session-3.scope. Feb 9 19:52:12.528881 systemd-logind[1219]: New session 3 of user core. Feb 9 19:52:12.575452 systemd[1]: Started sshd@1-139.178.70.110:22-139.178.89.65:59818.service. Feb 9 19:52:12.605446 sshd[1396]: Accepted publickey for core from 139.178.89.65 port 59818 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:52:12.606206 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:52:12.608673 systemd-logind[1219]: New session 4 of user core. Feb 9 19:52:12.608892 systemd[1]: Started session-4.scope. Feb 9 19:52:12.657765 sshd[1396]: pam_unix(sshd:session): session closed for user core Feb 9 19:52:12.659202 systemd[1]: Started sshd@2-139.178.70.110:22-139.178.89.65:59832.service. Feb 9 19:52:12.661706 systemd[1]: sshd@1-139.178.70.110:22-139.178.89.65:59818.service: Deactivated successfully. Feb 9 19:52:12.663420 systemd[1]: session-4.scope: Deactivated successfully. 
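The "Startup finished in 6.608s (kernel) + 5.901s (userspace) = 12.510s" figure is systemd's own boot accounting. The standard way to break such a number down after the fact (generic systemd tooling, not something this log shows being run) is:

    # Overall boot time, the same numbers as the "Startup finished" line
    systemd-analyze time
    # Per-unit startup cost, slowest first
    systemd-analyze blame
    # Units on the critical path to the default target
    systemd-analyze critical-chain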
Feb 9 19:52:12.663740 systemd-logind[1219]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:52:12.664519 systemd-logind[1219]: Removed session 4. Feb 9 19:52:12.689738 sshd[1401]: Accepted publickey for core from 139.178.89.65 port 59832 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:52:12.690456 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:52:12.693269 systemd[1]: Started session-5.scope. Feb 9 19:52:12.693452 systemd-logind[1219]: New session 5 of user core. Feb 9 19:52:12.740600 sshd[1401]: pam_unix(sshd:session): session closed for user core Feb 9 19:52:12.742056 systemd[1]: Started sshd@3-139.178.70.110:22-139.178.89.65:59848.service. Feb 9 19:52:12.743775 systemd[1]: sshd@2-139.178.70.110:22-139.178.89.65:59832.service: Deactivated successfully. Feb 9 19:52:12.744595 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:52:12.744875 systemd-logind[1219]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:52:12.745559 systemd-logind[1219]: Removed session 5. Feb 9 19:52:12.771876 sshd[1408]: Accepted publickey for core from 139.178.89.65 port 59848 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:52:12.772894 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:52:12.775548 systemd[1]: Started session-6.scope. Feb 9 19:52:12.775764 systemd-logind[1219]: New session 6 of user core. Feb 9 19:52:12.825196 sshd[1408]: pam_unix(sshd:session): session closed for user core Feb 9 19:52:12.826554 systemd[1]: Started sshd@4-139.178.70.110:22-139.178.89.65:59864.service. Feb 9 19:52:12.827916 systemd[1]: sshd@3-139.178.70.110:22-139.178.89.65:59848.service: Deactivated successfully. Feb 9 19:52:12.828549 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:52:12.828812 systemd-logind[1219]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:52:12.829279 systemd-logind[1219]: Removed session 6. Feb 9 19:52:12.858093 sshd[1415]: Accepted publickey for core from 139.178.89.65 port 59864 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:52:12.859015 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:52:12.861574 systemd[1]: Started session-7.scope. Feb 9 19:52:12.861751 systemd-logind[1219]: New session 7 of user core. Feb 9 19:52:12.918973 sudo[1421]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 19:52:12.919308 sudo[1421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:52:12.926592 dbus-daemon[1202]: \xd0]\x8dw\xcdU: received setenforce notice (enforcing=-850282080) Feb 9 19:52:12.926980 sudo[1421]: pam_unix(sudo:session): session closed for user root Feb 9 19:52:12.928638 sshd[1415]: pam_unix(sshd:session): session closed for user core Feb 9 19:52:12.930269 systemd[1]: Started sshd@5-139.178.70.110:22-139.178.89.65:59876.service. Feb 9 19:52:12.931917 systemd[1]: sshd@4-139.178.70.110:22-139.178.89.65:59864.service: Deactivated successfully. Feb 9 19:52:12.932296 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:52:12.932620 systemd-logind[1219]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:52:12.933046 systemd-logind[1219]: Removed session 7. 
Feb 9 19:52:12.961635 sshd[1423]: Accepted publickey for core from 139.178.89.65 port 59876 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:52:12.962518 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:52:12.965934 systemd[1]: Started session-8.scope. Feb 9 19:52:12.966090 systemd-logind[1219]: New session 8 of user core. Feb 9 19:52:13.016563 sudo[1430]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 19:52:13.016715 sudo[1430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:52:13.019008 sudo[1430]: pam_unix(sudo:session): session closed for user root Feb 9 19:52:13.022732 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 19:52:13.022888 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:52:13.030116 systemd[1]: Stopping audit-rules.service... Feb 9 19:52:13.029000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:52:13.032203 kernel: kauditd_printk_skb: 209 callbacks suppressed Feb 9 19:52:13.032245 kernel: audit: type=1305 audit(1707508333.029:132): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:52:13.029000 audit[1433]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc1e8c5e0 a2=420 a3=0 items=0 ppid=1 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.038799 kernel: audit: type=1300 audit(1707508333.029:132): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc1e8c5e0 a2=420 a3=0 items=0 ppid=1 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.038834 kernel: audit: type=1327 audit(1707508333.029:132): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:52:13.029000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:52:13.040142 auditctl[1433]: No rules Feb 9 19:52:13.042191 kernel: audit: type=1131 audit(1707508333.039:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.040452 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 19:52:13.040624 systemd[1]: Stopped audit-rules.service. Feb 9 19:52:13.041839 systemd[1]: Starting audit-rules.service... Feb 9 19:52:13.053079 augenrules[1451]: No rules Feb 9 19:52:13.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.053503 systemd[1]: Finished audit-rules.service. 
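The audit records above are audit-rules.service being bounced: the PROCTITLE field stores the command's argv as NUL-separated hex, and augenrules then reports "No rules". A small sketch for decoding such a record and checking what is loaded afterwards (xxd is assumed to be available; auditctl -l is standard auditd tooling):

    # Decode the argv recorded in the PROCTITLE field of the CONFIG_CHANGE event
    echo 2F7362696E2F617564697463746C002D44 | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -D   (delete all audit rules)
    # Rules currently loaded in the kernel
    auditctl -l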
Feb 9 19:52:13.056208 sudo[1429]: pam_unix(sudo:session): session closed for user root Feb 9 19:52:13.056630 kernel: audit: type=1130 audit(1707508333.052:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.056663 kernel: audit: type=1106 audit(1707508333.055:135): pid=1429 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.055000 audit[1429]: USER_END pid=1429 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.055000 audit[1429]: CRED_DISP pid=1429 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.060185 sshd[1423]: pam_unix(sshd:session): session closed for user core Feb 9 19:52:13.062128 kernel: audit: type=1104 audit(1707508333.055:136): pid=1429 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.063687 systemd[1]: Started sshd@6-139.178.70.110:22-139.178.89.65:59890.service. Feb 9 19:52:13.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.110:22-139.178.89.65:59890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.070021 kernel: audit: type=1130 audit(1707508333.062:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.110:22-139.178.89.65:59890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.069268 systemd[1]: sshd@5-139.178.70.110:22-139.178.89.65:59876.service: Deactivated successfully. Feb 9 19:52:13.069837 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:52:13.067000 audit[1423]: USER_END pid=1423 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:13.070602 systemd-logind[1219]: Session 8 logged out. Waiting for processes to exit. 
Feb 9 19:52:13.077430 kernel: audit: type=1106 audit(1707508333.067:138): pid=1423 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:13.077466 kernel: audit: type=1104 audit(1707508333.067:139): pid=1423 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:13.067000 audit[1423]: CRED_DISP pid=1423 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:13.077763 systemd-logind[1219]: Removed session 8. Feb 9 19:52:13.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.110:22-139.178.89.65:59876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.094000 audit[1456]: USER_ACCT pid=1456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:13.096266 sshd[1456]: Accepted publickey for core from 139.178.89.65 port 59890 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:52:13.095000 audit[1456]: CRED_ACQ pid=1456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:13.095000 audit[1456]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff22cc9ce0 a2=3 a3=0 items=0 ppid=1 pid=1456 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.095000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:52:13.097030 sshd[1456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:52:13.099719 systemd[1]: Started session-9.scope. Feb 9 19:52:13.099838 systemd-logind[1219]: New session 9 of user core. Feb 9 19:52:13.100000 audit[1456]: USER_START pid=1456 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:13.101000 audit[1461]: CRED_ACQ pid=1461 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:13.147000 audit[1462]: USER_ACCT pid=1462 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 19:52:13.147000 audit[1462]: CRED_REFR pid=1462 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.149428 sudo[1462]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:52:13.149596 sudo[1462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:52:13.149000 audit[1462]: USER_START pid=1462 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.671696 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:52:13.675708 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:52:13.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.675911 systemd[1]: Reached target network-online.target. Feb 9 19:52:13.676952 systemd[1]: Starting docker.service... Feb 9 19:52:13.702160 env[1479]: time="2024-02-09T19:52:13.702133077Z" level=info msg="Starting up" Feb 9 19:52:13.703147 env[1479]: time="2024-02-09T19:52:13.703135714Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:52:13.703213 env[1479]: time="2024-02-09T19:52:13.703203719Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:52:13.703264 env[1479]: time="2024-02-09T19:52:13.703253560Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:52:13.703307 env[1479]: time="2024-02-09T19:52:13.703298470Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:52:13.706615 env[1479]: time="2024-02-09T19:52:13.706599860Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:52:13.706615 env[1479]: time="2024-02-09T19:52:13.706611538Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:52:13.706668 env[1479]: time="2024-02-09T19:52:13.706619737Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:52:13.706668 env[1479]: time="2024-02-09T19:52:13.706624535Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:52:13.709441 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1872376316-merged.mount: Deactivated successfully. Feb 9 19:52:13.722704 env[1479]: time="2024-02-09T19:52:13.722684819Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:52:13.722811 env[1479]: time="2024-02-09T19:52:13.722802291Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:52:13.722950 env[1479]: time="2024-02-09T19:52:13.722941911Z" level=info msg="Loading containers: start." 
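At this point dockerd has connected to its bundled containerd over unix:///var/run/docker/libcontainerd/docker-containerd.sock and warned that the blkio weight controllers are missing, consistent with the earlier "System is tainted: cgroupsv1" line. A hedged way to confirm the daemon's cgroup view once it is up (the docker info fields below exist in the 20.10 series this host runs; the stat check is generic):

    # Cgroup driver and cgroup version as dockerd sees them
    docker info --format '{{.CgroupDriver}} / cgroup v{{.CgroupVersion}}'
    # What the host actually mounts at /sys/fs/cgroup (tmpfs = v1, cgroup2fs = v2)
    stat -fc %T /sys/fs/cgroup/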
Feb 9 19:52:13.753000 audit[1509]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.753000 audit[1509]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffa1892600 a2=0 a3=7fffa18925ec items=0 ppid=1479 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.753000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 19:52:13.755000 audit[1511]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.755000 audit[1511]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff70367630 a2=0 a3=7fff7036761c items=0 ppid=1479 pid=1511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.755000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 19:52:13.756000 audit[1513]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.756000 audit[1513]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe7a47ee00 a2=0 a3=7ffe7a47edec items=0 ppid=1479 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.756000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:52:13.757000 audit[1515]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.757000 audit[1515]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcfe548110 a2=0 a3=7ffcfe5480fc items=0 ppid=1479 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.757000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:52:13.758000 audit[1517]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.758000 audit[1517]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe6296fd40 a2=0 a3=7ffe6296fd2c items=0 ppid=1479 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.758000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 19:52:13.769000 audit[1522]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 
19:52:13.769000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcb5a683a0 a2=0 a3=7ffcb5a6838c items=0 ppid=1479 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.769000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 19:52:13.772000 audit[1524]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.772000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff7725c9b0 a2=0 a3=7fff7725c99c items=0 ppid=1479 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.772000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 19:52:13.773000 audit[1526]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.773000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc6d6267e0 a2=0 a3=7ffc6d6267cc items=0 ppid=1479 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.773000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 19:52:13.774000 audit[1528]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.774000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe67319220 a2=0 a3=7ffe6731920c items=0 ppid=1479 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.774000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:52:13.778000 audit[1532]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.778000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff3bef6dd0 a2=0 a3=7fff3bef6dbc items=0 ppid=1479 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.778000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:52:13.778000 audit[1533]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.778000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdf0a8f5c0 a2=0 a3=7ffdf0a8f5ac items=0 ppid=1479 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.778000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:52:13.786552 kernel: Initializing XFRM netlink socket Feb 9 19:52:13.808551 env[1479]: time="2024-02-09T19:52:13.808512007Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:52:13.820000 audit[1542]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.820000 audit[1542]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffdb4d61d80 a2=0 a3=7ffdb4d61d6c items=0 ppid=1479 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.820000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 19:52:13.829000 audit[1545]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.829000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd7b9fa520 a2=0 a3=7ffd7b9fa50c items=0 ppid=1479 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.829000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 19:52:13.831000 audit[1548]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.831000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdd1369250 a2=0 a3=7ffdd136923c items=0 ppid=1479 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.831000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 19:52:13.832000 audit[1550]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.832000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc655ba6d0 a2=0 a3=7ffc655ba6bc items=0 ppid=1479 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.832000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 19:52:13.833000 audit[1552]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1552 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.833000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffec4ad65f0 a2=0 a3=7ffec4ad65dc items=0 ppid=1479 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.833000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 19:52:13.835000 audit[1554]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.835000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffcd017ac10 a2=0 a3=7ffcd017abfc items=0 ppid=1479 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.835000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 19:52:13.836000 audit[1556]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.836000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe00beb9d0 a2=0 a3=7ffe00beb9bc items=0 ppid=1479 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.836000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 19:52:13.841000 audit[1559]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.841000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc03a2d430 a2=0 a3=7ffc03a2d41c items=0 ppid=1479 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.841000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 19:52:13.842000 audit[1561]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.842000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe465b8ac0 a2=0 a3=7ffe465b8aac items=0 ppid=1479 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.842000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:52:13.843000 
audit[1563]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.843000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffb71cc7f0 a2=0 a3=7fffb71cc7dc items=0 ppid=1479 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.843000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:52:13.845000 audit[1565]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.845000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff2af96e00 a2=0 a3=7fff2af96dec items=0 ppid=1479 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.845000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 19:52:13.846816 systemd-networkd[1111]: docker0: Link UP Feb 9 19:52:13.849000 audit[1569]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.849000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffff030210 a2=0 a3=7fffff0301fc items=0 ppid=1479 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.849000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:52:13.850000 audit[1570]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:13.850000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcfb22c120 a2=0 a3=7ffcfb22c10c items=0 ppid=1479 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:13.850000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:52:13.851852 env[1479]: time="2024-02-09T19:52:13.851838251Z" level=info msg="Loading containers: done." Feb 9 19:52:13.858112 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1509890492-merged.mount: Deactivated successfully. 
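The NETFILTER_CFG burst above is dockerd creating its DOCKER, DOCKER-USER and DOCKER-ISOLATION chains; each SYSCALL record carries the exact iptables invocation as NUL-separated hex in its PROCTITLE field. A short sketch for reading one back and inspecting the result (the hex string is copied from the first record in the burst; xxd is assumed to be available):

    # Decode the first PROCTITLE above into the command that was run
    echo 2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /usr/sbin/iptables --wait -t nat -N DOCKER
    # Chains docker just created
    iptables -t nat -L DOCKER -n
    iptables -t filter -L DOCKER-USER -n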
Feb 9 19:52:13.866432 env[1479]: time="2024-02-09T19:52:13.866406543Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:52:13.866547 env[1479]: time="2024-02-09T19:52:13.866521873Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:52:13.866600 env[1479]: time="2024-02-09T19:52:13.866584877Z" level=info msg="Daemon has completed initialization" Feb 9 19:52:13.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:13.872405 systemd[1]: Started docker.service. Feb 9 19:52:13.877368 env[1479]: time="2024-02-09T19:52:13.877336365Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:52:13.888153 systemd[1]: Reloading. Feb 9 19:52:13.931865 /usr/lib/systemd/system-generators/torcx-generator[1616]: time="2024-02-09T19:52:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:52:13.932363 /usr/lib/systemd/system-generators/torcx-generator[1616]: time="2024-02-09T19:52:13Z" level=info msg="torcx already run" Feb 9 19:52:13.985239 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:52:13.985252 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:52:13.996436 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:52:14.040920 systemd[1]: Started kubelet.service. Feb 9 19:52:14.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:14.077287 kubelet[1682]: E0209 19:52:14.077252 1682 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:52:14.078630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:52:14.078728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:52:14.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:52:14.608701 env[1235]: time="2024-02-09T19:52:14.608515589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:52:15.195442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065692714.mount: Deactivated successfully. 
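The first kubelet start above fails flag validation because --container-runtime-endpoint was never set, so the unit exits with status 1 and systemd later retries it (the "Scheduled restart job" line further down). A minimal sketch of the flag the error asks for, aimed at the containerd socket this host already serves (/run/containerd/containerd.sock per the containerd log earlier); on a real node the full argument list comes from the kubelet unit and its drop-ins, not from this one flag:

    # Point the kubelet at the local containerd CRI socket (sketch only)
    kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
    # Where the flags normally come from
    systemctl cat kubelet.service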
Feb 9 19:52:16.584905 env[1235]: time="2024-02-09T19:52:16.584873164Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:16.586796 env[1235]: time="2024-02-09T19:52:16.586778438Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:16.589271 env[1235]: time="2024-02-09T19:52:16.589248155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:16.592177 env[1235]: time="2024-02-09T19:52:16.592158926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:16.592799 env[1235]: time="2024-02-09T19:52:16.592774033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:52:16.599475 env[1235]: time="2024-02-09T19:52:16.599444764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:52:17.946769 update_engine[1220]: I0209 19:52:17.946558 1220 update_attempter.cc:509] Updating boot flags... Feb 9 19:52:18.403058 env[1235]: time="2024-02-09T19:52:18.403025793Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:18.409737 env[1235]: time="2024-02-09T19:52:18.409712963Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:18.417503 env[1235]: time="2024-02-09T19:52:18.417477262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:18.427038 env[1235]: time="2024-02-09T19:52:18.427020624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:18.427572 env[1235]: time="2024-02-09T19:52:18.427554104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:52:18.435187 env[1235]: time="2024-02-09T19:52:18.435152746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:52:19.577024 env[1235]: time="2024-02-09T19:52:19.576991928Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:19.590858 env[1235]: time="2024-02-09T19:52:19.590838516Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:19.596687 env[1235]: time="2024-02-09T19:52:19.596661073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:19.601753 env[1235]: time="2024-02-09T19:52:19.601736925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:19.602281 env[1235]: time="2024-02-09T19:52:19.602262667Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:52:19.608793 env[1235]: time="2024-02-09T19:52:19.608765733Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:52:20.563382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245206426.mount: Deactivated successfully. Feb 9 19:52:21.133398 env[1235]: time="2024-02-09T19:52:21.133356868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:21.154599 env[1235]: time="2024-02-09T19:52:21.154571904Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:21.163005 env[1235]: time="2024-02-09T19:52:21.162985079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:21.167050 env[1235]: time="2024-02-09T19:52:21.167032606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:21.167267 env[1235]: time="2024-02-09T19:52:21.167247479Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:52:21.174495 env[1235]: time="2024-02-09T19:52:21.174467721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:52:21.807725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265770820.mount: Deactivated successfully. 
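Each successful pull in this stretch ends with containerd pairing the image tag with the image ID it resolved ("PullImage ... returns image reference sha256:..."). A rough sketch for scraping those pairs out of saved journal text, for example to compare what different nodes pulled; the regular expression is illustrative and tied to the exact message format shown here:

    # Sketch: collect image-tag -> image-ID pairs from containerd's
    # "PullImage ... returns image reference ..." journal messages.
    import re

    PULL_RE = re.compile(
        r'PullImage \\"(?P<image>[^"\\]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]{64})\\"'
    )

    def pulled_images(journal_text: str) -> dict[str, str]:
        """Map each pulled image tag to the image ID containerd reported."""
        return {m["image"]: m["ref"] for m in PULL_RE.finditer(journal_text)}

    if __name__ == "__main__":
        sample = (
            'msg="PullImage \\"registry.k8s.io/kube-proxy:v1.26.13\\" returns image reference '
            '\\"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\\""'
        )
        print(pulled_images(sample))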
Feb 9 19:52:21.810083 env[1235]: time="2024-02-09T19:52:21.810052447Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:21.810612 env[1235]: time="2024-02-09T19:52:21.810598521Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:21.815083 env[1235]: time="2024-02-09T19:52:21.815058977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:21.818762 env[1235]: time="2024-02-09T19:52:21.818746601Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:21.818917 env[1235]: time="2024-02-09T19:52:21.818900144Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:52:21.825823 env[1235]: time="2024-02-09T19:52:21.825784011Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:52:22.532622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3277438.mount: Deactivated successfully. Feb 9 19:52:24.243092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:52:24.243270 systemd[1]: Stopped kubelet.service. Feb 9 19:52:24.253743 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 9 19:52:24.253820 kernel: audit: type=1130 audit(1707508344.241:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:24.253851 kernel: audit: type=1131 audit(1707508344.241:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:24.253869 kernel: audit: type=1130 audit(1707508344.243:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:24.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:24.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:24.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:24.244721 systemd[1]: Started kubelet.service. 
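The audit records interleaved here carry their own clock: audit(1707508344.241:177) is epoch seconds plus a record serial, and 1707508344.241 corresponds to the journal's own Feb 9 19:52:24.241 prefix. A small sketch for decoding those stamps when lining audit events up with the rest of the log:

    # Sketch: turn the audit(<epoch>.<msec>:<serial>) stamp in a record into a
    # UTC datetime plus serial, e.g. audit(1707508344.241:177).
    import re
    from datetime import datetime, timezone

    AUDIT_RE = re.compile(r"audit\((?P<epoch>\d+\.\d+):(?P<serial>\d+)\)")

    def audit_stamp(record: str) -> tuple[datetime, int]:
        m = AUDIT_RE.search(record)
        if m is None:
            raise ValueError("no audit(...) stamp found")
        return datetime.fromtimestamp(float(m["epoch"]), tz=timezone.utc), int(m["serial"])

    if __name__ == "__main__":
        ts, serial = audit_stamp("kernel: audit: type=1130 audit(1707508344.241:177): pid=1 ...")
        print(ts.isoformat(), serial)  # 2024-02-09T19:52:24.241000+00:00 177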
Feb 9 19:52:24.297261 kubelet[1742]: E0209 19:52:24.297228 1742 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:52:24.299582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:52:24.299670 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:52:24.302677 kernel: audit: type=1131 audit(1707508344.298:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:52:24.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:52:26.371231 env[1235]: time="2024-02-09T19:52:26.371202935Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:26.375952 env[1235]: time="2024-02-09T19:52:26.375935075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:26.378633 env[1235]: time="2024-02-09T19:52:26.378617911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:26.380726 env[1235]: time="2024-02-09T19:52:26.380711228Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:26.381043 env[1235]: time="2024-02-09T19:52:26.381024447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:52:26.407210 env[1235]: time="2024-02-09T19:52:26.407181027Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:52:26.929039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924665938.mount: Deactivated successfully. 
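That is the second kubelet to die on the same flag-validation error (pids 1682 and 1742), so systemd will keep scheduling restarts until the flag is fixed. A throwaway sketch for counting those failures in a saved journal excerpt, using the systemd message format verbatim from the lines above:

    # Sketch: count kubelet.service main-process failures in journal text by
    # matching systemd's "Main process exited, code=exited, status=N/FAILURE".
    import re

    FAIL_RE = re.compile(
        r"kubelet\.service: Main process exited, code=exited, status=(?P<status>\d+)/FAILURE"
    )

    def kubelet_failures(journal_text: str) -> list[str]:
        """Return the exit status of every kubelet.service failure found."""
        return [m["status"] for m in FAIL_RE.finditer(journal_text)]

    if __name__ == "__main__":
        excerpt = (
            "systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE\n"
            "systemd[1]: kubelet.service: Failed with result 'exit-code'.\n"
        )
        print(len(kubelet_failures(excerpt)), "failure(s)")  # 1 failure(s)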
Feb 9 19:52:27.601673 env[1235]: time="2024-02-09T19:52:27.601643227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:27.623465 env[1235]: time="2024-02-09T19:52:27.623443698Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:27.631063 env[1235]: time="2024-02-09T19:52:27.631039635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:27.640916 env[1235]: time="2024-02-09T19:52:27.640891590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:27.641218 env[1235]: time="2024-02-09T19:52:27.641196025Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:52:29.649065 systemd[1]: Stopped kubelet.service. Feb 9 19:52:29.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:29.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:29.658386 kernel: audit: type=1130 audit(1707508349.649:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:29.658419 kernel: audit: type=1131 audit(1707508349.654:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:29.665906 systemd[1]: Reloading. Feb 9 19:52:29.711886 /usr/lib/systemd/system-generators/torcx-generator[1834]: time="2024-02-09T19:52:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:52:29.711904 /usr/lib/systemd/system-generators/torcx-generator[1834]: time="2024-02-09T19:52:29Z" level=info msg="torcx already run" Feb 9 19:52:29.761128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:52:29.761144 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:52:29.773692 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
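The reload once again flags /usr/lib/systemd/system/locksmithd.service lines 8 and 9 for the deprecated CPUShares= and MemoryLimit= directives. A quick sketch that scans a unit file for those directives and prints the successors systemd suggests (CPUWeight= and MemoryMax=); the naive line scan is illustrative, not how systemd itself parses units:

    # Sketch: find deprecated resource-control directives in a systemd unit file
    # and print the replacements suggested in the warnings above.
    from pathlib import Path

    DEPRECATED = {"CPUShares": "CPUWeight", "MemoryLimit": "MemoryMax"}

    def deprecated_directives(unit_path: Path) -> list[tuple[int, str, str]]:
        """Return (line number, old directive, suggested directive) tuples."""
        hits = []
        for lineno, line in enumerate(unit_path.read_text().splitlines(), start=1):
            key = line.split("=", 1)[0].strip()
            if key in DEPRECATED:
                hits.append((lineno, key, DEPRECATED[key]))
        return hits

    if __name__ == "__main__":
        unit = Path("/usr/lib/systemd/system/locksmithd.service")
        if unit.exists():
            for lineno, old, new in deprecated_directives(unit):
                print(f"{unit}:{lineno}: {old}= is deprecated, use {new}=")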
Feb 9 19:52:29.823128 systemd[1]: Started kubelet.service. Feb 9 19:52:29.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:29.828588 kernel: audit: type=1130 audit(1707508349.822:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:29.855069 kubelet[1898]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:52:29.855069 kubelet[1898]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:52:29.855309 kubelet[1898]: I0209 19:52:29.855106 1898 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:52:29.856481 kubelet[1898]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:52:29.856481 kubelet[1898]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:52:30.119846 kubelet[1898]: I0209 19:52:30.119828 1898 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:52:30.119956 kubelet[1898]: I0209 19:52:30.119947 1898 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:52:30.120137 kubelet[1898]: I0209 19:52:30.120128 1898 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:52:30.133758 kubelet[1898]: E0209 19:52:30.133737 1898 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.133833 kubelet[1898]: I0209 19:52:30.133769 1898 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:52:30.134313 kubelet[1898]: I0209 19:52:30.134303 1898 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:52:30.134584 kubelet[1898]: I0209 19:52:30.134576 1898 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:52:30.134687 kubelet[1898]: I0209 19:52:30.134673 1898 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:52:30.134797 kubelet[1898]: I0209 19:52:30.134789 1898 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:52:30.134846 kubelet[1898]: I0209 19:52:30.134839 1898 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:52:30.135447 kubelet[1898]: I0209 19:52:30.135438 1898 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:52:30.142455 kubelet[1898]: I0209 19:52:30.142437 1898 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:52:30.142455 kubelet[1898]: I0209 19:52:30.142458 1898 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:52:30.142599 kubelet[1898]: I0209 19:52:30.142476 1898 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:52:30.142599 kubelet[1898]: I0209 19:52:30.142491 1898 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:52:30.144394 kubelet[1898]: I0209 19:52:30.144377 1898 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:52:30.144586 kubelet[1898]: W0209 19:52:30.144575 1898 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
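The NodeConfig dump above spells out the default hard-eviction thresholds the kubelet will enforce: nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, memory.available < 100Mi. Below is a rough sketch of the same comparisons against local statistics; the two filesystem paths are assumptions loosely based on the KubeletRootDir and the containerd mountpoint named elsewhere in the log, and the kubelet derives these signals from its own stats provider rather than statvfs:

    # Sketch: evaluate the default hard-eviction thresholds from the NodeConfig
    # dump against local filesystem and memory statistics (Linux only).
    import os

    NODEFS = "/var/lib/kubelet"       # assumed nodefs location
    IMAGEFS = "/var/lib/containerd"   # assumed imagefs location

    def free_fraction(path: str) -> float:
        st = os.statvfs(path)
        return st.f_bavail / st.f_blocks if st.f_blocks else 1.0

    def inodes_free_fraction(path: str) -> float:
        st = os.statvfs(path)
        return st.f_favail / st.f_files if st.f_files else 1.0

    def memory_available_bytes() -> int:
        with open("/proc/meminfo") as meminfo:
            for line in meminfo:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) * 1024  # value is reported in kB
        return 0

    if __name__ == "__main__":
        checks = [
            ("nodefs.available", free_fraction(NODEFS), 0.10),
            ("nodefs.inodesFree", inodes_free_fraction(NODEFS), 0.05),
            ("imagefs.available", free_fraction(IMAGEFS), 0.15),
            ("memory.available", float(memory_available_bytes()), 100 * 1024 * 1024),
        ]
        for signal, value, threshold in checks:
            state = "below threshold (would evict)" if value < threshold else "ok"
            print(f"{signal}: {value:.4g} vs {threshold:.4g} -> {state}")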
Feb 9 19:52:30.145809 kubelet[1898]: I0209 19:52:30.145795 1898 server.go:1186] "Started kubelet" Feb 9 19:52:30.145901 kubelet[1898]: W0209 19:52:30.145878 1898 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.145931 kubelet[1898]: E0209 19:52:30.145905 1898 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.153323 kubelet[1898]: W0209 19:52:30.153298 1898 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.153396 kubelet[1898]: E0209 19:52:30.153386 1898 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.153554 kubelet[1898]: I0209 19:52:30.153529 1898 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:52:30.157577 kubelet[1898]: E0209 19:52:30.157507 1898 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b249cc3573d6ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 52, 30, 145779437, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 52, 30, 145779437, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://139.178.70.110:6443/api/v1/namespaces/default/events": dial tcp 139.178.70.110:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:52:30.157697 kubelet[1898]: I0209 19:52:30.157687 1898 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:52:30.161000 audit[1898]: AVC avc: denied { mac_admin } for pid=1898 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:30.161993 kubelet[1898]: I0209 19:52:30.161983 1898 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr 
/var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:52:30.162069 kubelet[1898]: I0209 19:52:30.162060 1898 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:52:30.162166 kubelet[1898]: I0209 19:52:30.162158 1898 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:52:30.161000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:30.166476 kernel: audit: type=1400 audit(1707508350.161:184): avc: denied { mac_admin } for pid=1898 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:30.166517 kernel: audit: type=1401 audit(1707508350.161:184): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:30.166547 kernel: audit: type=1300 audit(1707508350.161:184): arch=c000003e syscall=188 success=no exit=-22 a0=c000ddab10 a1=c000d51f08 a2=c000ddaae0 a3=25 items=0 ppid=1 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.161000 audit[1898]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ddab10 a1=c000d51f08 a2=c000ddaae0 a3=25 items=0 ppid=1 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.166910 kubelet[1898]: E0209 19:52:30.166899 1898 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:52:30.166976 kubelet[1898]: E0209 19:52:30.166967 1898 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:52:30.167056 kubelet[1898]: I0209 19:52:30.167049 1898 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:52:30.161000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:30.172613 kubelet[1898]: I0209 19:52:30.172604 1898 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:52:30.172861 kubelet[1898]: W0209 19:52:30.172843 1898 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.172922 kubelet[1898]: E0209 19:52:30.172914 1898 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.173168 kubelet[1898]: E0209 19:52:30.173153 1898 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.174593 kernel: audit: type=1327 audit(1707508350.161:184): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:30.161000 audit[1898]: AVC avc: denied { mac_admin } for pid=1898 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:30.177662 kernel: audit: type=1400 audit(1707508350.161:185): avc: denied { mac_admin } for pid=1898 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:30.161000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:30.179142 kernel: audit: type=1401 audit(1707508350.161:185): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:30.179177 kernel: audit: type=1300 audit(1707508350.161:185): arch=c000003e syscall=188 success=no exit=-22 a0=c000d6b560 a1=c000d51f20 a2=c000ddaba0 a3=25 items=0 ppid=1 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.161000 audit[1898]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d6b560 a1=c000d51f20 a2=c000ddaba0 a3=25 items=0 ppid=1 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.161000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:30.165000 audit[1909]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.165000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe715b1b60 a2=0 a3=7ffe715b1b4c items=0 ppid=1898 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:52:30.173000 audit[1910]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.173000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb78758a0 a2=0 a3=7ffeb787588c items=0 ppid=1898 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:52:30.174000 audit[1912]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.174000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb8aa79a0 a2=0 a3=7fffb8aa798c items=0 ppid=1898 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:52:30.175000 audit[1914]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.175000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdb1e3f7e0 a2=0 a3=7ffdb1e3f7cc items=0 ppid=1898 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:52:30.205398 kubelet[1898]: I0209 19:52:30.205386 1898 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:52:30.205522 kubelet[1898]: I0209 19:52:30.205511 1898 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:52:30.205596 kubelet[1898]: I0209 19:52:30.205590 1898 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:52:30.205000 audit[1919]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.205000 audit[1919]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc1f196830 a2=0 a3=7ffc1f19681c items=0 ppid=1898 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:52:30.206000 audit[1920]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.206000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3d498970 a2=0 a3=7ffc3d49895c items=0 ppid=1898 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:52:30.213915 kubelet[1898]: I0209 19:52:30.213906 1898 policy_none.go:49] "None policy: Start" Feb 9 19:52:30.214289 kubelet[1898]: I0209 19:52:30.214276 1898 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:52:30.214324 kubelet[1898]: I0209 19:52:30.214291 1898 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:52:30.218000 audit[1924]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.218000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd17c50e50 a2=0 a3=7ffd17c50e3c items=0 ppid=1898 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.218000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:52:30.223171 kubelet[1898]: I0209 19:52:30.223155 1898 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:52:30.222000 audit[1898]: AVC avc: denied { mac_admin } for pid=1898 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:30.222000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:30.222000 audit[1898]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000969a40 a1=c00093cc48 a2=c000969a10 a3=25 items=0 ppid=1 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.222000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:30.223315 kubelet[1898]: I0209 
19:52:30.223203 1898 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:52:30.224245 kubelet[1898]: I0209 19:52:30.224238 1898 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:52:30.224388 kubelet[1898]: E0209 19:52:30.224380 1898 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 19:52:30.225000 audit[1928]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.225000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd183d9500 a2=0 a3=7ffd183d94ec items=0 ppid=1898 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:52:30.225000 audit[1929]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.225000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd764bf840 a2=0 a3=7ffd764bf82c items=0 ppid=1898 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:52:30.226000 audit[1930]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.226000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc39518b40 a2=0 a3=7ffc39518b2c items=0 ppid=1898 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:52:30.227000 audit[1932]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.227000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff77f7f2e0 a2=0 a3=7fff77f7f2cc items=0 ppid=1898 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:52:30.229000 audit[1934]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1934 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.229000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff49ae8680 a2=0 a3=7fff49ae866c items=0 ppid=1898 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:52:30.230000 audit[1936]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.230000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffda6535e10 a2=0 a3=7ffda6535dfc items=0 ppid=1898 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:52:30.231000 audit[1938]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.231000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fffbe7da9b0 a2=0 a3=7fffbe7da99c items=0 ppid=1898 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:52:30.233000 audit[1940]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.233000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe0203b2c0 a2=0 a3=7ffe0203b2ac items=0 ppid=1898 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.233000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:52:30.233917 kubelet[1898]: I0209 19:52:30.233910 1898 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:52:30.233000 audit[1941]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.233000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc95a9c1b0 a2=0 a3=7ffc95a9c19c items=0 ppid=1898 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.233000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:52:30.234000 audit[1942]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.234000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc58af1e50 a2=0 a3=7ffc58af1e3c items=0 ppid=1898 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:52:30.234000 audit[1943]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.234000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffde195c8a0 a2=0 a3=7ffde195c88c items=0 ppid=1898 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:52:30.234000 audit[1944]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.234000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5bfd9f00 a2=0 a3=7ffd5bfd9eec items=0 ppid=1898 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:52:30.235000 audit[1946]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:30.235000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe60b9ec20 a2=0 a3=7ffe60b9ec0c items=0 ppid=1898 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:52:30.236000 audit[1947]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1947 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 19:52:30.236000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff999d7e00 a2=0 a3=7fff999d7dec items=0 ppid=1898 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:52:30.236000 audit[1948]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.236000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffdc5fcc1c0 a2=0 a3=7ffdc5fcc1ac items=0 ppid=1898 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:52:30.238000 audit[1950]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.238000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff3cacc140 a2=0 a3=7fff3cacc12c items=0 ppid=1898 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:52:30.238000 audit[1951]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.238000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedd0e2c00 a2=0 a3=7ffedd0e2bec items=0 ppid=1898 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:52:30.239000 audit[1952]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.239000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd0da79e0 a2=0 a3=7ffcd0da79cc items=0 ppid=1898 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:52:30.240000 audit[1954]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1954 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.240000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc82518ef0 a2=0 a3=7ffc82518edc items=0 ppid=1898 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:52:30.242000 audit[1956]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.242000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff4371cf30 a2=0 a3=7fff4371cf1c items=0 ppid=1898 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:52:30.243000 audit[1958]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.243000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd4d334f20 a2=0 a3=7ffd4d334f0c items=0 ppid=1898 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.243000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:52:30.244000 audit[1960]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.244000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff54dd7e20 a2=0 a3=7fff54dd7e0c items=0 ppid=1898 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:52:30.246000 audit[1962]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.246000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffdcbb7f240 a2=0 a3=7ffdcbb7f22c items=0 ppid=1898 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.246000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:52:30.247726 kubelet[1898]: I0209 19:52:30.247714 1898 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:52:30.247804 kubelet[1898]: I0209 19:52:30.247795 1898 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:52:30.247870 kubelet[1898]: I0209 19:52:30.247861 1898 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:52:30.247954 kubelet[1898]: E0209 19:52:30.247945 1898 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:52:30.247000 audit[1963]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.247000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc22667e80 a2=0 a3=7ffc22667e6c items=0 ppid=1898 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.247000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:52:30.248694 kubelet[1898]: W0209 19:52:30.248667 1898 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.248764 kubelet[1898]: E0209 19:52:30.248756 1898 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.248000 audit[1964]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.248000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea6e1b030 a2=0 a3=7ffea6e1b01c items=0 ppid=1898 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.248000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:52:30.249000 audit[1965]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:30.249000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0de05d10 a2=0 a3=7fff0de05cfc items=0 ppid=1898 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:30.249000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 
19:52:30.273805 kubelet[1898]: I0209 19:52:30.273791 1898 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:52:30.274120 kubelet[1898]: E0209 19:52:30.274107 1898 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Feb 9 19:52:30.348460 kubelet[1898]: I0209 19:52:30.348434 1898 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:30.349558 kubelet[1898]: I0209 19:52:30.349541 1898 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:30.353609 kubelet[1898]: I0209 19:52:30.353594 1898 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:30.354049 kubelet[1898]: I0209 19:52:30.354031 1898 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.110:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.110:6443: connect: connection refused" Feb 9 19:52:30.355509 kubelet[1898]: I0209 19:52:30.355485 1898 status_manager.go:698] "Failed to get status for pod" podUID=d527ad24deb995c3b6e9bc4ff884227a pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.110:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.110:6443: connect: connection refused" Feb 9 19:52:30.355800 kubelet[1898]: I0209 19:52:30.355784 1898 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.110:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.110:6443: connect: connection refused" Feb 9 19:52:30.373712 kubelet[1898]: E0209 19:52:30.373643 1898 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.475830 kubelet[1898]: I0209 19:52:30.475807 1898 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:52:30.476060 kubelet[1898]: E0209 19:52:30.476043 1898 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Feb 9 19:52:30.574436 kubelet[1898]: I0209 19:52:30.574410 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:30.574561 kubelet[1898]: I0209 19:52:30.574444 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d527ad24deb995c3b6e9bc4ff884227a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d527ad24deb995c3b6e9bc4ff884227a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:30.574561 kubelet[1898]: I0209 19:52:30.574461 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d527ad24deb995c3b6e9bc4ff884227a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d527ad24deb995c3b6e9bc4ff884227a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:30.574561 kubelet[1898]: I0209 19:52:30.574477 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d527ad24deb995c3b6e9bc4ff884227a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d527ad24deb995c3b6e9bc4ff884227a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:30.574561 kubelet[1898]: I0209 19:52:30.574494 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:30.574561 kubelet[1898]: I0209 19:52:30.574510 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:30.574697 kubelet[1898]: I0209 19:52:30.574525 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:30.574697 kubelet[1898]: I0209 19:52:30.574650 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:30.574697 kubelet[1898]: I0209 19:52:30.574670 1898 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:52:30.774447 kubelet[1898]: E0209 19:52:30.774389 1898 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.877814 kubelet[1898]: I0209 19:52:30.877796 1898 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:52:30.878191 kubelet[1898]: E0209 19:52:30.878182 1898 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Feb 9 19:52:30.955831 kubelet[1898]: W0209 19:52:30.955791 1898 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.955831 kubelet[1898]: E0209 19:52:30.955834 1898 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:30.958437 env[1235]: time="2024-02-09T19:52:30.958407614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 19:52:30.958987 env[1235]: time="2024-02-09T19:52:30.958969434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d527ad24deb995c3b6e9bc4ff884227a,Namespace:kube-system,Attempt:0,}" Feb 9 19:52:30.960895 env[1235]: time="2024-02-09T19:52:30.960871374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 19:52:31.336026 kubelet[1898]: W0209 19:52:31.335986 1898 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:31.336126 kubelet[1898]: E0209 19:52:31.336029 1898 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:31.452673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252310150.mount: Deactivated successfully. 
Feb 9 19:52:31.473005 env[1235]: time="2024-02-09T19:52:31.472982029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.473756 env[1235]: time="2024-02-09T19:52:31.473740146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.474986 env[1235]: time="2024-02-09T19:52:31.474967009Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.475741 env[1235]: time="2024-02-09T19:52:31.475720659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.476411 env[1235]: time="2024-02-09T19:52:31.476393249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.477037 env[1235]: time="2024-02-09T19:52:31.477016974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.479569 env[1235]: time="2024-02-09T19:52:31.479522625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.481939 env[1235]: time="2024-02-09T19:52:31.481917762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.482891 env[1235]: time="2024-02-09T19:52:31.482870386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.483816 env[1235]: time="2024-02-09T19:52:31.483800733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.484408 env[1235]: time="2024-02-09T19:52:31.484392264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.500899 env[1235]: time="2024-02-09T19:52:31.494940953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:31.500899 env[1235]: time="2024-02-09T19:52:31.494966249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:31.500899 env[1235]: time="2024-02-09T19:52:31.494974005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:31.500899 env[1235]: time="2024-02-09T19:52:31.495060850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ba03d725d3d888f1cb29f2216275dd9a368c473c74a6d682e3049e957fa6563 pid=1975 runtime=io.containerd.runc.v2 Feb 9 19:52:31.501376 env[1235]: time="2024-02-09T19:52:31.498416295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:31.501376 env[1235]: time="2024-02-09T19:52:31.498438285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:31.501376 env[1235]: time="2024-02-09T19:52:31.498445207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:31.501376 env[1235]: time="2024-02-09T19:52:31.498633379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/918920a2cac5be8d486820adfb5810b2a6010a6a60abf69519dc76c15022bfaa pid=1987 runtime=io.containerd.runc.v2 Feb 9 19:52:31.501943 env[1235]: time="2024-02-09T19:52:31.501924300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:31.555676 env[1235]: time="2024-02-09T19:52:31.555637521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:31.555774 env[1235]: time="2024-02-09T19:52:31.555662159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:31.555774 env[1235]: time="2024-02-09T19:52:31.555681221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:31.557621 env[1235]: time="2024-02-09T19:52:31.557597479Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31eed00d624aa2f435c8c412d66804382f61b9ab766bafa18b3e3b8aca59741b pid=2034 runtime=io.containerd.runc.v2 Feb 9 19:52:31.559378 env[1235]: time="2024-02-09T19:52:31.559358290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"918920a2cac5be8d486820adfb5810b2a6010a6a60abf69519dc76c15022bfaa\"" Feb 9 19:52:31.562252 env[1235]: time="2024-02-09T19:52:31.562237702Z" level=info msg="CreateContainer within sandbox \"918920a2cac5be8d486820adfb5810b2a6010a6a60abf69519dc76c15022bfaa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:52:31.575866 kubelet[1898]: E0209 19:52:31.575842 1898 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:31.576523 env[1235]: time="2024-02-09T19:52:31.576503449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d527ad24deb995c3b6e9bc4ff884227a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ba03d725d3d888f1cb29f2216275dd9a368c473c74a6d682e3049e957fa6563\"" Feb 9 19:52:31.586019 env[1235]: time="2024-02-09T19:52:31.585991825Z" level=info msg="CreateContainer within sandbox \"3ba03d725d3d888f1cb29f2216275dd9a368c473c74a6d682e3049e957fa6563\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:52:31.598276 env[1235]: time="2024-02-09T19:52:31.598218117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"31eed00d624aa2f435c8c412d66804382f61b9ab766bafa18b3e3b8aca59741b\"" Feb 9 19:52:31.600696 env[1235]: time="2024-02-09T19:52:31.600664199Z" level=info msg="CreateContainer within sandbox \"31eed00d624aa2f435c8c412d66804382f61b9ab766bafa18b3e3b8aca59741b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:52:31.601766 env[1235]: time="2024-02-09T19:52:31.601752670Z" level=info msg="CreateContainer within sandbox \"3ba03d725d3d888f1cb29f2216275dd9a368c473c74a6d682e3049e957fa6563\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc2cff4b76dc1599748a2073d0ac44c1aee38aa2c5d4d700a363e2920992dc6c\"" Feb 9 19:52:31.601952 env[1235]: time="2024-02-09T19:52:31.601938125Z" level=info msg="CreateContainer within sandbox \"918920a2cac5be8d486820adfb5810b2a6010a6a60abf69519dc76c15022bfaa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3ad8a5ceb435c005df176aa07c1be9a4259efd6b268ce85aa7864cc7275176d5\"" Feb 9 19:52:31.602114 env[1235]: time="2024-02-09T19:52:31.602094286Z" level=info msg="StartContainer for \"dc2cff4b76dc1599748a2073d0ac44c1aee38aa2c5d4d700a363e2920992dc6c\"" Feb 9 19:52:31.602822 env[1235]: time="2024-02-09T19:52:31.602806667Z" level=info msg="StartContainer for \"3ad8a5ceb435c005df176aa07c1be9a4259efd6b268ce85aa7864cc7275176d5\"" Feb 9 19:52:31.607779 env[1235]: time="2024-02-09T19:52:31.607751360Z" level=info msg="CreateContainer within sandbox 
\"31eed00d624aa2f435c8c412d66804382f61b9ab766bafa18b3e3b8aca59741b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b79d74d210fab0d81f03af1627b031246094fd37472e373cafd078b1ca896cfa\"" Feb 9 19:52:31.608078 env[1235]: time="2024-02-09T19:52:31.608057661Z" level=info msg="StartContainer for \"b79d74d210fab0d81f03af1627b031246094fd37472e373cafd078b1ca896cfa\"" Feb 9 19:52:31.618209 kubelet[1898]: W0209 19:52:31.618187 1898 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:31.618209 kubelet[1898]: E0209 19:52:31.618209 1898 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:31.675707 env[1235]: time="2024-02-09T19:52:31.674778880Z" level=info msg="StartContainer for \"3ad8a5ceb435c005df176aa07c1be9a4259efd6b268ce85aa7864cc7275176d5\" returns successfully" Feb 9 19:52:31.675795 kubelet[1898]: W0209 19:52:31.675444 1898 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:31.675795 kubelet[1898]: E0209 19:52:31.675524 1898 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:31.679261 kubelet[1898]: I0209 19:52:31.679103 1898 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:52:31.679261 kubelet[1898]: E0209 19:52:31.679252 1898 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Feb 9 19:52:31.707652 env[1235]: time="2024-02-09T19:52:31.707625858Z" level=info msg="StartContainer for \"b79d74d210fab0d81f03af1627b031246094fd37472e373cafd078b1ca896cfa\" returns successfully" Feb 9 19:52:31.707935 env[1235]: time="2024-02-09T19:52:31.707909668Z" level=info msg="StartContainer for \"dc2cff4b76dc1599748a2073d0ac44c1aee38aa2c5d4d700a363e2920992dc6c\" returns successfully" Feb 9 19:52:32.164674 kubelet[1898]: E0209 19:52:32.164652 1898 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.110:6443: connect: connection refused Feb 9 19:52:32.252719 kubelet[1898]: I0209 19:52:32.252696 1898 status_manager.go:698] "Failed to get status for pod" podUID=d527ad24deb995c3b6e9bc4ff884227a pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.110:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.110:6443: connect: connection refused" Feb 9 19:52:32.254031 kubelet[1898]: I0209 19:52:32.254010 
1898 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.110:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.110:6443: connect: connection refused" Feb 9 19:52:32.344789 kubelet[1898]: I0209 19:52:32.344767 1898 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.110:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.110:6443: connect: connection refused" Feb 9 19:52:33.280598 kubelet[1898]: I0209 19:52:33.280577 1898 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:52:33.872289 kubelet[1898]: E0209 19:52:33.872264 1898 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 19:52:33.949441 kubelet[1898]: I0209 19:52:33.949418 1898 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 19:52:34.146083 kubelet[1898]: I0209 19:52:34.146017 1898 apiserver.go:52] "Watching apiserver" Feb 9 19:52:34.573245 kubelet[1898]: I0209 19:52:34.573232 1898 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:52:34.601611 kubelet[1898]: I0209 19:52:34.601600 1898 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:52:34.747381 kubelet[1898]: E0209 19:52:34.747358 1898 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:36.370432 systemd[1]: Reloading. Feb 9 19:52:36.432669 /usr/lib/systemd/system-generators/torcx-generator[2223]: time="2024-02-09T19:52:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:52:36.432686 /usr/lib/systemd/system-generators/torcx-generator[2223]: time="2024-02-09T19:52:36Z" level=info msg="torcx already run" Feb 9 19:52:36.488024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:52:36.488139 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:52:36.499810 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:52:36.562650 systemd[1]: Stopping kubelet.service... Feb 9 19:52:36.580774 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:52:36.581023 systemd[1]: Stopped kubelet.service. Feb 9 19:52:36.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:52:36.581932 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 19:52:36.581972 kernel: audit: type=1131 audit(1707508356.580:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:36.582961 systemd[1]: Started kubelet.service. Feb 9 19:52:36.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:36.589554 kernel: audit: type=1130 audit(1707508356.583:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:36.654184 kubelet[2290]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:52:36.654184 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:52:36.654490 kubelet[2290]: I0209 19:52:36.654168 2290 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:52:36.655299 kubelet[2290]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:52:36.655299 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:52:36.657104 kubelet[2290]: I0209 19:52:36.657088 2290 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:52:36.657104 kubelet[2290]: I0209 19:52:36.657101 2290 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:52:36.657229 kubelet[2290]: I0209 19:52:36.657218 2290 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:52:36.657950 kubelet[2290]: I0209 19:52:36.657939 2290 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:52:36.659835 kubelet[2290]: I0209 19:52:36.659823 2290 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:52:36.660058 kubelet[2290]: I0209 19:52:36.660047 2290 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:52:36.660093 kubelet[2290]: I0209 19:52:36.660088 2290 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:52:36.660168 kubelet[2290]: I0209 19:52:36.660100 2290 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:52:36.660168 kubelet[2290]: I0209 19:52:36.660107 2290 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:52:36.661096 kubelet[2290]: I0209 19:52:36.660284 2290 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:52:36.661096 kubelet[2290]: I0209 19:52:36.660485 2290 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:52:36.662513 kubelet[2290]: I0209 19:52:36.662499 2290 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:52:36.662513 kubelet[2290]: I0209 19:52:36.662514 2290 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:52:36.662604 kubelet[2290]: I0209 19:52:36.662526 2290 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:52:36.662604 kubelet[2290]: I0209 19:52:36.662568 2290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:52:36.670457 kubelet[2290]: I0209 19:52:36.670442 2290 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:52:36.671258 kubelet[2290]: I0209 19:52:36.671246 2290 server.go:1186] "Started kubelet" Feb 9 19:52:36.671000 audit[2290]: AVC avc: denied { mac_admin } for pid=2290 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:36.674597 kubelet[2290]: I0209 19:52:36.672068 2290 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:52:36.674597 kubelet[2290]: I0209 19:52:36.672087 
2290 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:52:36.674597 kubelet[2290]: I0209 19:52:36.672102 2290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:52:36.674597 kubelet[2290]: I0209 19:52:36.672375 2290 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:52:36.674597 kubelet[2290]: I0209 19:52:36.672868 2290 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:52:36.676549 kernel: audit: type=1400 audit(1707508356.671:222): avc: denied { mac_admin } for pid=2290 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:36.676590 kernel: audit: type=1401 audit(1707508356.671:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:36.676607 kernel: audit: type=1300 audit(1707508356.671:222): arch=c000003e syscall=188 success=no exit=-22 a0=c00060b2c0 a1=c0005e9668 a2=c00060b290 a3=25 items=0 ppid=1 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:36.671000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:36.671000 audit[2290]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00060b2c0 a1=c0005e9668 a2=c00060b290 a3=25 items=0 ppid=1 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:36.678938 kubelet[2290]: I0209 19:52:36.678929 2290 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:52:36.679126 kubelet[2290]: I0209 19:52:36.679118 2290 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:52:36.680588 kubelet[2290]: E0209 19:52:36.680580 2290 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:52:36.680646 kubelet[2290]: E0209 19:52:36.680639 2290 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:52:36.671000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:36.671000 audit[2290]: AVC avc: denied { mac_admin } for pid=2290 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:36.695583 kernel: audit: type=1327 audit(1707508356.671:222): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:36.695636 kernel: audit: type=1400 audit(1707508356.671:223): avc: denied { mac_admin } for pid=2290 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:36.695658 kernel: audit: type=1401 audit(1707508356.671:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:36.671000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:36.671000 audit[2290]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000eba860 a1=c0005e9680 a2=c00060b350 a3=25 items=0 ppid=1 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:36.700568 kernel: audit: type=1300 audit(1707508356.671:223): arch=c000003e syscall=188 success=no exit=-22 a0=c000eba860 a1=c0005e9680 a2=c00060b350 a3=25 items=0 ppid=1 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:36.671000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:36.703929 kernel: audit: type=1327 audit(1707508356.671:223): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:36.737697 kubelet[2290]: I0209 19:52:36.737677 2290 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:52:36.749620 kubelet[2290]: I0209 19:52:36.749604 2290 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:52:36.749620 kubelet[2290]: I0209 19:52:36.749615 2290 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:52:36.749620 kubelet[2290]: I0209 19:52:36.749624 2290 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:52:36.749906 kubelet[2290]: I0209 19:52:36.749895 2290 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:52:36.749906 kubelet[2290]: I0209 19:52:36.749906 2290 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:52:36.749959 kubelet[2290]: I0209 19:52:36.749910 2290 policy_none.go:49] "None policy: Start" Feb 9 19:52:36.750300 kubelet[2290]: I0209 19:52:36.750288 2290 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:52:36.750300 kubelet[2290]: I0209 19:52:36.750300 2290 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:52:36.750528 kubelet[2290]: I0209 19:52:36.750516 2290 state_mem.go:75] "Updated machine memory state" Feb 9 19:52:36.751222 kubelet[2290]: I0209 19:52:36.751210 2290 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:52:36.750000 audit[2290]: AVC avc: denied { mac_admin } for pid=2290 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:52:36.750000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:52:36.750000 audit[2290]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e3a4e0 a1=c001281ae8 a2=c000e3a4b0 a3=25 items=0 ppid=1 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:36.750000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:52:36.751375 kubelet[2290]: I0209 19:52:36.751253 2290 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:52:36.751375 kubelet[2290]: I0209 19:52:36.751346 2290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:52:36.761743 kubelet[2290]: I0209 19:52:36.761711 2290 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:52:36.761743 kubelet[2290]: I0209 19:52:36.761739 2290 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:52:36.761743 kubelet[2290]: I0209 19:52:36.761750 2290 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:52:36.761860 kubelet[2290]: E0209 19:52:36.761781 2290 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:52:36.780185 kubelet[2290]: I0209 19:52:36.780165 2290 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:52:36.784156 kubelet[2290]: I0209 19:52:36.784053 2290 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 19:52:36.784156 kubelet[2290]: I0209 19:52:36.784095 2290 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 19:52:36.862771 kubelet[2290]: I0209 19:52:36.862750 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:36.862907 kubelet[2290]: I0209 19:52:36.862898 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:36.862987 kubelet[2290]: I0209 19:52:36.862972 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:36.867421 kubelet[2290]: E0209 19:52:36.866849 2290 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 19:52:36.879425 kubelet[2290]: I0209 19:52:36.879409 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:36.879476 kubelet[2290]: I0209 19:52:36.879433 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:52:36.879476 kubelet[2290]: I0209 19:52:36.879446 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d527ad24deb995c3b6e9bc4ff884227a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d527ad24deb995c3b6e9bc4ff884227a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:36.879476 kubelet[2290]: I0209 19:52:36.879458 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d527ad24deb995c3b6e9bc4ff884227a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d527ad24deb995c3b6e9bc4ff884227a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:36.879476 kubelet[2290]: I0209 19:52:36.879470 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:36.879594 kubelet[2290]: I0209 19:52:36.879480 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:36.879594 kubelet[2290]: I0209 19:52:36.879493 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:36.879594 kubelet[2290]: I0209 19:52:36.879503 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d527ad24deb995c3b6e9bc4ff884227a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d527ad24deb995c3b6e9bc4ff884227a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:36.879594 kubelet[2290]: I0209 19:52:36.879516 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:37.070844 kubelet[2290]: E0209 19:52:37.070821 2290 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:37.670985 kubelet[2290]: I0209 19:52:37.670966 2290 apiserver.go:52] "Watching apiserver" Feb 9 19:52:37.879896 kubelet[2290]: I0209 19:52:37.879871 2290 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:52:37.892321 kubelet[2290]: I0209 19:52:37.892301 2290 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:52:38.268867 kubelet[2290]: E0209 19:52:38.268845 2290 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 19:52:38.465782 kubelet[2290]: E0209 19:52:38.465759 2290 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 19:52:38.666155 kubelet[2290]: E0209 19:52:38.666091 2290 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 19:52:38.937336 kubelet[2290]: I0209 19:52:38.937268 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.936586557 pod.CreationTimestamp="2024-02-09 19:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:52:38.936501616 +0000 UTC m=+2.341867411" watchObservedRunningTime="2024-02-09 19:52:38.936586557 +0000 UTC m=+2.341952341" Feb 9 19:52:40.067335 kubelet[2290]: I0209 19:52:40.067310 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.067281806 pod.CreationTimestamp="2024-02-09 19:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:52:39.667596621 +0000 UTC m=+3.072962417" watchObservedRunningTime="2024-02-09 19:52:40.067281806 +0000 UTC m=+3.472647594" Feb 9 19:52:41.376136 sudo[1462]: pam_unix(sudo:session): session closed for user root Feb 9 19:52:41.375000 audit[1462]: USER_END pid=1462 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:52:41.376000 audit[1462]: CRED_DISP pid=1462 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:52:41.400270 sshd[1456]: pam_unix(sshd:session): session closed for user core Feb 9 19:52:41.402000 audit[1456]: USER_END pid=1456 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:41.402000 audit[1456]: CRED_DISP pid=1456 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:52:41.408361 systemd-logind[1219]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:52:41.409440 systemd[1]: sshd@6-139.178.70.110:22-139.178.89.65:59890.service: Deactivated successfully. Feb 9 19:52:41.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.110:22-139.178.89.65:59890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:52:41.410115 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:52:41.411085 systemd-logind[1219]: Removed session 9. Feb 9 19:52:41.822245 kubelet[2290]: I0209 19:52:41.822226 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.822203811 pod.CreationTimestamp="2024-02-09 19:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:52:40.067749788 +0000 UTC m=+3.473115583" watchObservedRunningTime="2024-02-09 19:52:41.822203811 +0000 UTC m=+5.227569600" Feb 9 19:52:50.618187 kubelet[2290]: I0209 19:52:50.618144 2290 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:52:50.618468 env[1235]: time="2024-02-09T19:52:50.618394260Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 19:52:50.618648 kubelet[2290]: I0209 19:52:50.618497 2290 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:52:51.434289 kubelet[2290]: I0209 19:52:51.434258 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:51.475419 kubelet[2290]: I0209 19:52:51.475381 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d351370-c046-4ac0-ad4b-7c5dc8958e9b-kube-proxy\") pod \"kube-proxy-dk8cf\" (UID: \"3d351370-c046-4ac0-ad4b-7c5dc8958e9b\") " pod="kube-system/kube-proxy-dk8cf" Feb 9 19:52:51.475419 kubelet[2290]: I0209 19:52:51.475415 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d351370-c046-4ac0-ad4b-7c5dc8958e9b-lib-modules\") pod \"kube-proxy-dk8cf\" (UID: \"3d351370-c046-4ac0-ad4b-7c5dc8958e9b\") " pod="kube-system/kube-proxy-dk8cf" Feb 9 19:52:51.475598 kubelet[2290]: I0209 19:52:51.475434 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d351370-c046-4ac0-ad4b-7c5dc8958e9b-xtables-lock\") pod \"kube-proxy-dk8cf\" (UID: \"3d351370-c046-4ac0-ad4b-7c5dc8958e9b\") " pod="kube-system/kube-proxy-dk8cf" Feb 9 19:52:51.475598 kubelet[2290]: I0209 19:52:51.475454 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skc65\" (UniqueName: \"kubernetes.io/projected/3d351370-c046-4ac0-ad4b-7c5dc8958e9b-kube-api-access-skc65\") pod \"kube-proxy-dk8cf\" (UID: \"3d351370-c046-4ac0-ad4b-7c5dc8958e9b\") " pod="kube-system/kube-proxy-dk8cf" Feb 9 19:52:51.699068 kubelet[2290]: I0209 19:52:51.698980 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:51.705158 kubelet[2290]: W0209 19:52:51.705137 2290 reflector.go:424] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Feb 9 19:52:51.705312 kubelet[2290]: E0209 19:52:51.705303 2290 reflector.go:140] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Feb 9 19:52:51.705368 kubelet[2290]: W0209 19:52:51.705330 2290 reflector.go:424] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Feb 9 19:52:51.705419 kubelet[2290]: E0209 19:52:51.705412 2290 reflector.go:140] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Feb 9 19:52:51.737610 
env[1235]: time="2024-02-09T19:52:51.737583319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dk8cf,Uid:3d351370-c046-4ac0-ad4b-7c5dc8958e9b,Namespace:kube-system,Attempt:0,}" Feb 9 19:52:51.776918 kubelet[2290]: I0209 19:52:51.776900 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k7kg\" (UniqueName: \"kubernetes.io/projected/35972ce7-cddd-4b72-ab07-9e330d628383-kube-api-access-2k7kg\") pod \"tigera-operator-cfc98749c-86k69\" (UID: \"35972ce7-cddd-4b72-ab07-9e330d628383\") " pod="tigera-operator/tigera-operator-cfc98749c-86k69" Feb 9 19:52:51.777075 kubelet[2290]: I0209 19:52:51.777065 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35972ce7-cddd-4b72-ab07-9e330d628383-var-lib-calico\") pod \"tigera-operator-cfc98749c-86k69\" (UID: \"35972ce7-cddd-4b72-ab07-9e330d628383\") " pod="tigera-operator/tigera-operator-cfc98749c-86k69" Feb 9 19:52:51.859933 env[1235]: time="2024-02-09T19:52:51.859876009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:51.860072 env[1235]: time="2024-02-09T19:52:51.859916344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:51.860072 env[1235]: time="2024-02-09T19:52:51.859946706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:51.860251 env[1235]: time="2024-02-09T19:52:51.860220152Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31361273710273ed4f9b2ac0d307a69fd162f22d9001d059e4bf8ce70ea54639 pid=2394 runtime=io.containerd.runc.v2 Feb 9 19:52:51.894303 env[1235]: time="2024-02-09T19:52:51.894277933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dk8cf,Uid:3d351370-c046-4ac0-ad4b-7c5dc8958e9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"31361273710273ed4f9b2ac0d307a69fd162f22d9001d059e4bf8ce70ea54639\"" Feb 9 19:52:51.896081 env[1235]: time="2024-02-09T19:52:51.896064748Z" level=info msg="CreateContainer within sandbox \"31361273710273ed4f9b2ac0d307a69fd162f22d9001d059e4bf8ce70ea54639\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:52:52.005274 env[1235]: time="2024-02-09T19:52:52.005246621Z" level=info msg="CreateContainer within sandbox \"31361273710273ed4f9b2ac0d307a69fd162f22d9001d059e4bf8ce70ea54639\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8fd11e5434886bb969a2a1b008b495058dab132e3c21ba98e840fdf30a50b3ac\"" Feb 9 19:52:52.005803 env[1235]: time="2024-02-09T19:52:52.005771925Z" level=info msg="StartContainer for \"8fd11e5434886bb969a2a1b008b495058dab132e3c21ba98e840fdf30a50b3ac\"" Feb 9 19:52:52.069735 env[1235]: time="2024-02-09T19:52:52.069701308Z" level=info msg="StartContainer for \"8fd11e5434886bb969a2a1b008b495058dab132e3c21ba98e840fdf30a50b3ac\" returns successfully" Feb 9 19:52:52.601278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721738442.mount: Deactivated successfully. 
Feb 9 19:52:52.890936 kubelet[2290]: E0209 19:52:52.890878 2290 projected.go:292] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:52:52.890936 kubelet[2290]: E0209 19:52:52.890901 2290 projected.go:198] Error preparing data for projected volume kube-api-access-2k7kg for pod tigera-operator/tigera-operator-cfc98749c-86k69: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:52:52.891177 kubelet[2290]: E0209 19:52:52.890954 2290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35972ce7-cddd-4b72-ab07-9e330d628383-kube-api-access-2k7kg podName:35972ce7-cddd-4b72-ab07-9e330d628383 nodeName:}" failed. No retries permitted until 2024-02-09 19:52:53.390934238 +0000 UTC m=+16.796300021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2k7kg" (UniqueName: "kubernetes.io/projected/35972ce7-cddd-4b72-ab07-9e330d628383-kube-api-access-2k7kg") pod "tigera-operator-cfc98749c-86k69" (UID: "35972ce7-cddd-4b72-ab07-9e330d628383") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:52:53.006991 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 19:52:53.007080 kernel: audit: type=1325 audit(1707508372.999:230): table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.007108 kernel: audit: type=1300 audit(1707508372.999:230): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc640b5e70 a2=0 a3=7ffc640b5e5c items=0 ppid=2445 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:52.999000 audit[2482]: NETFILTER_CFG table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:52.999000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc640b5e70 a2=0 a3=7ffc640b5e5c items=0 ppid=2445 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:52.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:52:53.000000 audit[2481]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.012755 kernel: audit: type=1327 audit(1707508372.999:230): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:52:53.012790 kernel: audit: type=1325 audit(1707508373.000:231): table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.000000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedddbb9e0 a2=0 a3=31030 items=0 ppid=2445 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.016651 kernel: audit: type=1300 audit(1707508373.000:231): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedddbb9e0 a2=0 
a3=31030 items=0 ppid=2445 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:52:53.018716 kernel: audit: type=1327 audit(1707508373.000:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:52:53.018752 kernel: audit: type=1325 audit(1707508373.000:232): table=nat:61 family=10 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.000000 audit[2484]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.000000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc90858500 a2=0 a3=7ffc908584ec items=0 ppid=2445 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.024396 kernel: audit: type=1300 audit(1707508373.000:232): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc90858500 a2=0 a3=7ffc908584ec items=0 ppid=2445 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.024434 kernel: audit: type=1327 audit(1707508373.000:232): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:52:53.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:52:53.001000 audit[2485]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.028089 kernel: audit: type=1325 audit(1707508373.001:233): table=nat:62 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.001000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2fd23640 a2=0 a3=7ffe2fd2362c items=0 ppid=2445 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:52:53.002000 audit[2486]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.002000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd67884430 a2=0 a3=7ffd6788441c items=0 ppid=2445 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 
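The NETFILTER_CFG / SYSCALL / PROCTITLE triples in this stretch are the freshly started kube-proxy (ppid 2445 here, via xtables-nft-multi) registering its KUBE-PROXY-CANARY and KUBE-EXTERNAL-SERVICES chains; family=2 is AF_INET (iptables) and family=10 is AF_INET6 (ip6tables), which is why each chain appears twice. A rough sketch for tallying such records from a saved copy of this log, assuming Python 3 and a hypothetical file name:

    import collections
    import re

    # Count audit NETFILTER_CFG records by (protocol family, table, operation).
    # The table field carries a trailing :N sequence number in these records; it is stripped here.
    pattern = re.compile(r"NETFILTER_CFG table=([\w-]+)(?::\d+)? family=(\d+) entries=\d+ op=(\S+)")
    counts = collections.Counter()
    with open("boot.log") as log:   # hypothetical path to a saved copy of this log
        for line in log:
            for table, family, op in pattern.findall(line):
                proto = {"2": "ipv4", "10": "ipv6"}.get(family, family)
                counts[(proto, table, op)] += 1
    for key, n in sorted(counts.items()):
        print(*key, n)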
Feb 9 19:52:53.003000 audit[2487]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.003000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8ef81940 a2=0 a3=7ffc8ef8192c items=0 ppid=2445 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:52:53.129000 audit[2488]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.129000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffda9d01c20 a2=0 a3=7ffda9d01c0c items=0 ppid=2445 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:52:53.144000 audit[2490]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.144000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffed5974d20 a2=0 a3=7ffed5974d0c items=0 ppid=2445 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:52:53.149000 audit[2493]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.149000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdb3a5c300 a2=0 a3=7ffdb3a5c2ec items=0 ppid=2445 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:52:53.150000 audit[2494]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.150000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb09025e0 a2=0 a3=7ffcb09025cc items=0 ppid=2445 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.150000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:52:53.151000 audit[2496]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.151000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe4d8b3c0 a2=0 a3=7fffe4d8b3ac items=0 ppid=2445 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:52:53.152000 audit[2497]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.152000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff749c5fc0 a2=0 a3=7fff749c5fac items=0 ppid=2445 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:52:53.154000 audit[2499]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.154000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeef9843c0 a2=0 a3=7ffeef9843ac items=0 ppid=2445 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:52:53.156000 audit[2502]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.156000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd15ad84d0 a2=0 a3=7ffd15ad84bc items=0 ppid=2445 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:52:53.156000 audit[2503]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.156000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb7e647a0 a2=0 a3=7ffcb7e6478c items=0 
ppid=2445 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:52:53.158000 audit[2505]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.158000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff85d58270 a2=0 a3=7fff85d5825c items=0 ppid=2445 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:52:53.159000 audit[2506]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.159000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3f8e6fd0 a2=0 a3=7ffe3f8e6fbc items=0 ppid=2445 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:52:53.161000 audit[2508]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.161000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0b767850 a2=0 a3=7ffe0b76783c items=0 ppid=2445 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:52:53.163000 audit[2511]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.163000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffaa4557d0 a2=0 a3=7fffaa4557bc items=0 ppid=2445 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:52:53.165000 audit[2514]: NETFILTER_CFG table=filter:78 family=2 entries=1 
op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.165000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdec6ee950 a2=0 a3=7ffdec6ee93c items=0 ppid=2445 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:52:53.166000 audit[2515]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.166000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc25d6470 a2=0 a3=7ffcc25d645c items=0 ppid=2445 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.166000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:52:53.173000 audit[2517]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.173000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffd7ce2980 a2=0 a3=7fffd7ce296c items=0 ppid=2445 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:52:53.175000 audit[2520]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:52:53.175000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe68856ac0 a2=0 a3=7ffe68856aac items=0 ppid=2445 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:52:53.286000 audit[2524]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:53.286000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffeb6030810 a2=0 a3=7ffeb60307fc items=0 ppid=2445 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.286000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:53.318000 audit[2524]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:53.318000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffeb6030810 a2=0 a3=7ffeb60307fc items=0 ppid=2445 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:53.319000 audit[2529]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.319000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffef153bf60 a2=0 a3=7ffef153bf4c items=0 ppid=2445 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:52:53.326000 audit[2531]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.326000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdde533c70 a2=0 a3=7ffdde533c5c items=0 ppid=2445 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:52:53.328000 audit[2534]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.328000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcaeba3b90 a2=0 a3=7ffcaeba3b7c items=0 ppid=2445 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:52:53.329000 audit[2535]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.329000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfe6b9d70 a2=0 a3=7ffcfe6b9d5c items=0 ppid=2445 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.329000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:52:53.331000 audit[2537]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.331000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff94440e20 a2=0 a3=7fff94440e0c items=0 ppid=2445 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:52:53.332000 audit[2538]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.332000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd04b200d0 a2=0 a3=7ffd04b200bc items=0 ppid=2445 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:52:53.334000 audit[2540]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.334000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe32b63d30 a2=0 a3=7ffe32b63d1c items=0 ppid=2445 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.334000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:52:53.336000 audit[2543]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.336000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeced8e4f0 a2=0 a3=7ffeced8e4dc items=0 ppid=2445 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:52:53.337000 audit[2544]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 19:52:53.337000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5806b710 a2=0 a3=7ffd5806b6fc items=0 ppid=2445 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:52:53.339000 audit[2546]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.339000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8c7b40e0 a2=0 a3=7ffe8c7b40cc items=0 ppid=2445 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:52:53.340000 audit[2547]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.340000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9a2ae8d0 a2=0 a3=7ffd9a2ae8bc items=0 ppid=2445 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:52:53.342000 audit[2549]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.342000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff92962780 a2=0 a3=7fff9296276c items=0 ppid=2445 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:52:53.345000 audit[2552]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.345000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8253a1f0 a2=0 a3=7fff8253a1dc items=0 ppid=2445 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.345000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:52:53.348000 audit[2555]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.348000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0b29e300 a2=0 a3=7ffc0b29e2ec items=0 ppid=2445 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:52:53.349000 audit[2556]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.349000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff486d25d0 a2=0 a3=7fff486d25bc items=0 ppid=2445 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:52:53.351000 audit[2558]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.351000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe5b558630 a2=0 a3=7ffe5b55861c items=0 ppid=2445 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:52:53.353000 audit[2561]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:52:53.353000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe3ffdf280 a2=0 a3=7ffe3ffdf26c items=0 ppid=2445 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:52:53.360000 audit[2565]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:52:53.360000 audit[2565]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=1916 a0=3 a1=7ffcfcefe1c0 a2=0 a3=7ffcfcefe1ac items=0 ppid=2445 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.360000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:53.360000 audit[2565]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:52:53.360000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffcfcefe1c0 a2=0 a3=7ffcfcefe1ac items=0 ppid=2445 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:53.360000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:53.501713 env[1235]: time="2024-02-09T19:52:53.501464348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-86k69,Uid:35972ce7-cddd-4b72-ab07-9e330d628383,Namespace:tigera-operator,Attempt:0,}" Feb 9 19:52:53.528072 env[1235]: time="2024-02-09T19:52:53.527916885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:53.528072 env[1235]: time="2024-02-09T19:52:53.527950835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:53.528072 env[1235]: time="2024-02-09T19:52:53.527961863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:53.528223 env[1235]: time="2024-02-09T19:52:53.528182604Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/991446c23a69aaf422d3709862aff05dc07c17076f9d3bd71f53b6932fa575c5 pid=2575 runtime=io.containerd.runc.v2 Feb 9 19:52:53.569973 env[1235]: time="2024-02-09T19:52:53.569943296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-86k69,Uid:35972ce7-cddd-4b72-ab07-9e330d628383,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"991446c23a69aaf422d3709862aff05dc07c17076f9d3bd71f53b6932fa575c5\"" Feb 9 19:52:53.574896 env[1235]: time="2024-02-09T19:52:53.574872731Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 19:52:54.885113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269576461.mount: Deactivated successfully. 
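
[Editor's note] Decoding the remaining PROCTITLE values in the audit records above the same way shows kube-proxy creating its usual bootstrap chains and jump rules (KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, KUBE-EXTERNAL-SERVICES, KUBE-PROXY-FIREWALL) in the filter and nat tables for both IPv4 and IPv6, followed by iptables-restore --noflush --counters. Two examples, with the hex copied verbatim from the records above; the readable command lines in the comments are the decoder's output, not text from the log.

# Same decoding idea as the sketch above, kept self-contained so it runs on its own.
def decode(h: str) -> str:
    return bytes.fromhex(h).replace(b"\x00", b" ").decode()

print(decode("69707461626C6573002D770035002D5700313030303030"
             "002D4E004B5542452D5345525649434553002D740066696C746572"))
# iptables -w 5 -W 100000 -N KUBE-SERVICES -t filter
print(decode("69707461626C65732D726573746F7265002D770035002D5700313030303030"
             "002D2D6E6F666C757368002D2D636F756E74657273"))
# iptables-restore -w 5 -W 100000 --noflush --counters
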
Feb 9 19:52:56.567549 env[1235]: time="2024-02-09T19:52:56.567503176Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:56.568762 env[1235]: time="2024-02-09T19:52:56.568747764Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:56.570806 env[1235]: time="2024-02-09T19:52:56.570780018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:56.572145 env[1235]: time="2024-02-09T19:52:56.572123433Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:56.572809 env[1235]: time="2024-02-09T19:52:56.572789084Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 9 19:52:56.576132 env[1235]: time="2024-02-09T19:52:56.576099888Z" level=info msg="CreateContainer within sandbox \"991446c23a69aaf422d3709862aff05dc07c17076f9d3bd71f53b6932fa575c5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 19:52:56.582491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824751247.mount: Deactivated successfully. Feb 9 19:52:56.585622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776778947.mount: Deactivated successfully. 
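
[Editor's note] The env[1235] records here are containerd's logfmt-style output for the tigera-operator pod: RunPodSandbox returns the sandbox id, PullImage resolves quay.io/tigera/operator:v1.32.3 to its sha256 image reference, and the CreateContainer/StartContainer events follow below. A small parsing sketch for these key="value" records; the regex and the helper name parse_logfmt are ours, and the sample string is the structured part of the PullImage entry above.

import re

# Minimal sketch: extract the key="value" fields from a containerd logfmt record.
pattern = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    fields = {}
    for key, raw in pattern.findall(line):
        if raw.startswith('"') and raw.endswith('"'):
            raw = raw[1:-1].replace('\\"', '"')   # strip outer quotes, unescape \"
        fields[key] = raw
    return fields

sample = ('time="2024-02-09T19:52:53.574872731Z" level=info '
          'msg="PullImage \\"quay.io/tigera/operator:v1.32.3\\""')
print(parse_logfmt(sample)["msg"])   # PullImage "quay.io/tigera/operator:v1.32.3"
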
Feb 9 19:52:56.587382 env[1235]: time="2024-02-09T19:52:56.587356403Z" level=info msg="CreateContainer within sandbox \"991446c23a69aaf422d3709862aff05dc07c17076f9d3bd71f53b6932fa575c5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3081175f7dd8a15003ab055eb3efdfb4f30490e35a0142a1a09d6b7c0fa0d1c3\"" Feb 9 19:52:56.589142 env[1235]: time="2024-02-09T19:52:56.589121454Z" level=info msg="StartContainer for \"3081175f7dd8a15003ab055eb3efdfb4f30490e35a0142a1a09d6b7c0fa0d1c3\"" Feb 9 19:52:56.631559 env[1235]: time="2024-02-09T19:52:56.629239264Z" level=info msg="StartContainer for \"3081175f7dd8a15003ab055eb3efdfb4f30490e35a0142a1a09d6b7c0fa0d1c3\" returns successfully" Feb 9 19:52:56.789006 kubelet[2290]: I0209 19:52:56.788986 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dk8cf" podStartSLOduration=5.788963964 pod.CreationTimestamp="2024-02-09 19:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:52:52.794885041 +0000 UTC m=+16.200250835" watchObservedRunningTime="2024-02-09 19:52:56.788963964 +0000 UTC m=+20.194329759" Feb 9 19:52:58.227000 audit[2670]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:58.228923 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 19:52:58.228965 kernel: audit: type=1325 audit(1707508378.227:274): table=filter:103 family=2 entries=13 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:58.227000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc46be4da0 a2=0 a3=7ffc46be4d8c items=0 ppid=2445 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:58.234795 kernel: audit: type=1300 audit(1707508378.227:274): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc46be4da0 a2=0 a3=7ffc46be4d8c items=0 ppid=2445 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:58.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:58.236722 kernel: audit: type=1327 audit(1707508378.227:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:58.227000 audit[2670]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:58.227000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc46be4da0 a2=0 a3=7ffc46be4d8c items=0 ppid=2445 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:58.243566 kernel: audit: type=1325 audit(1707508378.227:275): table=nat:104 family=2 entries=20 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:58.243621 kernel: audit: type=1300 
audit(1707508378.227:275): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc46be4da0 a2=0 a3=7ffc46be4d8c items=0 ppid=2445 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:58.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:58.246287 kernel: audit: type=1327 audit(1707508378.227:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:58.264000 audit[2696]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:58.264000 audit[2696]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe0bd96110 a2=0 a3=7ffe0bd960fc items=0 ppid=2445 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:58.270932 kernel: audit: type=1325 audit(1707508378.264:276): table=filter:105 family=2 entries=14 op=nft_register_rule pid=2696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:58.270978 kernel: audit: type=1300 audit(1707508378.264:276): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe0bd96110 a2=0 a3=7ffe0bd960fc items=0 ppid=2445 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:58.270996 kernel: audit: type=1327 audit(1707508378.264:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:58.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:58.264000 audit[2696]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:58.264000 audit[2696]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe0bd96110 a2=0 a3=7ffe0bd960fc items=0 ppid=2445 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:58.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:58.278552 kernel: audit: type=1325 audit(1707508378.264:277): table=nat:106 family=2 entries=20 op=nft_register_rule pid=2696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:58.330192 kubelet[2290]: I0209 19:52:58.330167 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-86k69" podStartSLOduration=-9.223372029524633e+09 pod.CreationTimestamp="2024-02-09 19:52:51 +0000 UTC" firstStartedPulling="2024-02-09 19:52:53.570494246 +0000 UTC m=+16.975860029" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:52:56.799864671 +0000 UTC m=+20.205230466" 
watchObservedRunningTime="2024-02-09 19:52:58.330141896 +0000 UTC m=+21.735507690" Feb 9 19:52:58.330483 kubelet[2290]: I0209 19:52:58.330237 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:58.371085 kubelet[2290]: I0209 19:52:58.371059 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:58.419180 kubelet[2290]: I0209 19:52:58.419154 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-cni-bin-dir\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419329 kubelet[2290]: I0209 19:52:58.419320 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-flexvol-driver-host\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419402 kubelet[2290]: I0209 19:52:58.419395 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7c57abef-9ef6-4919-ba9f-099960e6f211-typha-certs\") pod \"calico-typha-64c6484466-8wzjv\" (UID: \"7c57abef-9ef6-4919-ba9f-099960e6f211\") " pod="calico-system/calico-typha-64c6484466-8wzjv" Feb 9 19:52:58.419515 kubelet[2290]: I0209 19:52:58.419501 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-cni-net-dir\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419561 kubelet[2290]: I0209 19:52:58.419523 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c57abef-9ef6-4919-ba9f-099960e6f211-tigera-ca-bundle\") pod \"calico-typha-64c6484466-8wzjv\" (UID: \"7c57abef-9ef6-4919-ba9f-099960e6f211\") " pod="calico-system/calico-typha-64c6484466-8wzjv" Feb 9 19:52:58.419561 kubelet[2290]: I0209 19:52:58.419548 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-lib-modules\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419603 kubelet[2290]: I0209 19:52:58.419563 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-xtables-lock\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419603 kubelet[2290]: I0209 19:52:58.419575 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bbcdb2b-337b-44aa-825c-b040fa39823e-tigera-ca-bundle\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419603 kubelet[2290]: I0209 19:52:58.419600 2290 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hczhh\" (UniqueName: \"kubernetes.io/projected/9bbcdb2b-337b-44aa-825c-b040fa39823e-kube-api-access-hczhh\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419667 kubelet[2290]: I0209 19:52:58.419617 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d67h\" (UniqueName: \"kubernetes.io/projected/7c57abef-9ef6-4919-ba9f-099960e6f211-kube-api-access-2d67h\") pod \"calico-typha-64c6484466-8wzjv\" (UID: \"7c57abef-9ef6-4919-ba9f-099960e6f211\") " pod="calico-system/calico-typha-64c6484466-8wzjv" Feb 9 19:52:58.419667 kubelet[2290]: I0209 19:52:58.419631 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-policysync\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419667 kubelet[2290]: I0209 19:52:58.419643 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9bbcdb2b-337b-44aa-825c-b040fa39823e-node-certs\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419667 kubelet[2290]: I0209 19:52:58.419655 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-var-run-calico\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419667 kubelet[2290]: I0209 19:52:58.419666 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-var-lib-calico\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.419765 kubelet[2290]: I0209 19:52:58.419679 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9bbcdb2b-337b-44aa-825c-b040fa39823e-cni-log-dir\") pod \"calico-node-r7jv2\" (UID: \"9bbcdb2b-337b-44aa-825c-b040fa39823e\") " pod="calico-system/calico-node-r7jv2" Feb 9 19:52:58.477789 kubelet[2290]: I0209 19:52:58.477754 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:52:58.477936 kubelet[2290]: E0209 19:52:58.477924 2290 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:52:58.520188 kubelet[2290]: I0209 19:52:58.520166 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0895769c-c5fb-4668-b9c0-ae965d762f27-socket-dir\") pod \"csi-node-driver-7c8lr\" (UID: \"0895769c-c5fb-4668-b9c0-ae965d762f27\") " pod="calico-system/csi-node-driver-7c8lr" Feb 9 19:52:58.520302 kubelet[2290]: 
I0209 19:52:58.520203 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0895769c-c5fb-4668-b9c0-ae965d762f27-registration-dir\") pod \"csi-node-driver-7c8lr\" (UID: \"0895769c-c5fb-4668-b9c0-ae965d762f27\") " pod="calico-system/csi-node-driver-7c8lr" Feb 9 19:52:58.520302 kubelet[2290]: I0209 19:52:58.520235 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0895769c-c5fb-4668-b9c0-ae965d762f27-kubelet-dir\") pod \"csi-node-driver-7c8lr\" (UID: \"0895769c-c5fb-4668-b9c0-ae965d762f27\") " pod="calico-system/csi-node-driver-7c8lr" Feb 9 19:52:58.520302 kubelet[2290]: I0209 19:52:58.520254 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0895769c-c5fb-4668-b9c0-ae965d762f27-varrun\") pod \"csi-node-driver-7c8lr\" (UID: \"0895769c-c5fb-4668-b9c0-ae965d762f27\") " pod="calico-system/csi-node-driver-7c8lr" Feb 9 19:52:58.520302 kubelet[2290]: I0209 19:52:58.520267 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqt28\" (UniqueName: \"kubernetes.io/projected/0895769c-c5fb-4668-b9c0-ae965d762f27-kube-api-access-tqt28\") pod \"csi-node-driver-7c8lr\" (UID: \"0895769c-c5fb-4668-b9c0-ae965d762f27\") " pod="calico-system/csi-node-driver-7c8lr" Feb 9 19:52:58.522376 kubelet[2290]: E0209 19:52:58.522359 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.522449 kubelet[2290]: W0209 19:52:58.522440 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.522518 kubelet[2290]: E0209 19:52:58.522510 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.522665 kubelet[2290]: E0209 19:52:58.522659 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.524840 kubelet[2290]: W0209 19:52:58.524780 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.524972 kubelet[2290]: E0209 19:52:58.524966 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.525509 kubelet[2290]: E0209 19:52:58.525499 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.525509 kubelet[2290]: W0209 19:52:58.525506 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.525588 kubelet[2290]: E0209 19:52:58.525518 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.525680 kubelet[2290]: E0209 19:52:58.525671 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.525680 kubelet[2290]: W0209 19:52:58.525677 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.525741 kubelet[2290]: E0209 19:52:58.525684 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.525832 kubelet[2290]: E0209 19:52:58.525823 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.525832 kubelet[2290]: W0209 19:52:58.525829 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.525906 kubelet[2290]: E0209 19:52:58.525898 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.526018 kubelet[2290]: E0209 19:52:58.526008 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.526018 kubelet[2290]: W0209 19:52:58.526014 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.526072 kubelet[2290]: E0209 19:52:58.526051 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.526621 kubelet[2290]: E0209 19:52:58.526608 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.526621 kubelet[2290]: W0209 19:52:58.526615 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.526680 kubelet[2290]: E0209 19:52:58.526657 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.526769 kubelet[2290]: E0209 19:52:58.526761 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.526769 kubelet[2290]: W0209 19:52:58.526767 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.526832 kubelet[2290]: E0209 19:52:58.526805 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.527360 kubelet[2290]: E0209 19:52:58.527351 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.527360 kubelet[2290]: W0209 19:52:58.527358 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.527432 kubelet[2290]: E0209 19:52:58.527401 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.527612 kubelet[2290]: E0209 19:52:58.527596 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.527612 kubelet[2290]: W0209 19:52:58.527603 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.527695 kubelet[2290]: E0209 19:52:58.527643 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.527809 kubelet[2290]: E0209 19:52:58.527786 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.527809 kubelet[2290]: W0209 19:52:58.527792 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.527874 kubelet[2290]: E0209 19:52:58.527842 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.527992 kubelet[2290]: E0209 19:52:58.527981 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.527992 kubelet[2290]: W0209 19:52:58.527986 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.528071 kubelet[2290]: E0209 19:52:58.528026 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.528178 kubelet[2290]: E0209 19:52:58.528162 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.528178 kubelet[2290]: W0209 19:52:58.528169 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.528235 kubelet[2290]: E0209 19:52:58.528208 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.528582 kubelet[2290]: E0209 19:52:58.528574 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.528582 kubelet[2290]: W0209 19:52:58.528580 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.528668 kubelet[2290]: E0209 19:52:58.528619 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.529048 kubelet[2290]: E0209 19:52:58.529040 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.529048 kubelet[2290]: W0209 19:52:58.529046 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.529109 kubelet[2290]: E0209 19:52:58.529082 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.529201 kubelet[2290]: E0209 19:52:58.529193 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.529201 kubelet[2290]: W0209 19:52:58.529199 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.529266 kubelet[2290]: E0209 19:52:58.529253 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.529351 kubelet[2290]: E0209 19:52:58.529343 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.529351 kubelet[2290]: W0209 19:52:58.529348 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.529632 kubelet[2290]: E0209 19:52:58.529388 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.529632 kubelet[2290]: E0209 19:52:58.529478 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.529632 kubelet[2290]: W0209 19:52:58.529482 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.529632 kubelet[2290]: E0209 19:52:58.529518 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.529632 kubelet[2290]: E0209 19:52:58.529597 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.529632 kubelet[2290]: W0209 19:52:58.529602 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.529779 kubelet[2290]: E0209 19:52:58.529767 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.530791 kubelet[2290]: E0209 19:52:58.530782 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.530791 kubelet[2290]: W0209 19:52:58.530789 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.530882 kubelet[2290]: E0209 19:52:58.530829 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.530979 kubelet[2290]: E0209 19:52:58.530970 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.530979 kubelet[2290]: W0209 19:52:58.530975 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.531038 kubelet[2290]: E0209 19:52:58.531019 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.531202 kubelet[2290]: E0209 19:52:58.531193 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.531202 kubelet[2290]: W0209 19:52:58.531199 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.531274 kubelet[2290]: E0209 19:52:58.531235 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.531573 kubelet[2290]: E0209 19:52:58.531563 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.531573 kubelet[2290]: W0209 19:52:58.531570 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.531652 kubelet[2290]: E0209 19:52:58.531644 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.531751 kubelet[2290]: E0209 19:52:58.531744 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.531751 kubelet[2290]: W0209 19:52:58.531750 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.531806 kubelet[2290]: E0209 19:52:58.531788 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.533607 kubelet[2290]: E0209 19:52:58.533597 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.533607 kubelet[2290]: W0209 19:52:58.533604 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.533669 kubelet[2290]: E0209 19:52:58.533651 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.533777 kubelet[2290]: E0209 19:52:58.533767 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.533777 kubelet[2290]: W0209 19:52:58.533773 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.533836 kubelet[2290]: E0209 19:52:58.533815 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.533952 kubelet[2290]: E0209 19:52:58.533944 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.533952 kubelet[2290]: W0209 19:52:58.533951 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.534009 kubelet[2290]: E0209 19:52:58.533991 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.534102 kubelet[2290]: E0209 19:52:58.534094 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.534102 kubelet[2290]: W0209 19:52:58.534100 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.534173 kubelet[2290]: E0209 19:52:58.534137 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.534274 kubelet[2290]: E0209 19:52:58.534264 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.534274 kubelet[2290]: W0209 19:52:58.534271 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.534346 kubelet[2290]: E0209 19:52:58.534330 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.534449 kubelet[2290]: E0209 19:52:58.534440 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.534449 kubelet[2290]: W0209 19:52:58.534446 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.534504 kubelet[2290]: E0209 19:52:58.534483 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.534702 kubelet[2290]: E0209 19:52:58.534692 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.534702 kubelet[2290]: W0209 19:52:58.534698 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.534764 kubelet[2290]: E0209 19:52:58.534736 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.535271 kubelet[2290]: E0209 19:52:58.535261 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.535271 kubelet[2290]: W0209 19:52:58.535269 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.535328 kubelet[2290]: E0209 19:52:58.535315 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.535436 kubelet[2290]: E0209 19:52:58.535425 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.535436 kubelet[2290]: W0209 19:52:58.535432 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.535889 kubelet[2290]: E0209 19:52:58.535880 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.536018 kubelet[2290]: E0209 19:52:58.536004 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.536056 kubelet[2290]: W0209 19:52:58.536019 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.536082 kubelet[2290]: E0209 19:52:58.536058 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.536175 kubelet[2290]: E0209 19:52:58.536166 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.536175 kubelet[2290]: W0209 19:52:58.536173 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.536270 kubelet[2290]: E0209 19:52:58.536263 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.536379 kubelet[2290]: E0209 19:52:58.536364 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.536379 kubelet[2290]: W0209 19:52:58.536376 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.536431 kubelet[2290]: E0209 19:52:58.536414 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.536674 kubelet[2290]: E0209 19:52:58.536666 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.536674 kubelet[2290]: W0209 19:52:58.536672 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.536731 kubelet[2290]: E0209 19:52:58.536710 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.536818 kubelet[2290]: E0209 19:52:58.536810 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.536818 kubelet[2290]: W0209 19:52:58.536816 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.536874 kubelet[2290]: E0209 19:52:58.536864 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.537272 kubelet[2290]: E0209 19:52:58.537263 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.537272 kubelet[2290]: W0209 19:52:58.537270 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.537341 kubelet[2290]: E0209 19:52:58.537333 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.537571 kubelet[2290]: E0209 19:52:58.537564 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.537571 kubelet[2290]: W0209 19:52:58.537570 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.537629 kubelet[2290]: E0209 19:52:58.537612 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.538598 kubelet[2290]: E0209 19:52:58.538588 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.538598 kubelet[2290]: W0209 19:52:58.538595 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.538663 kubelet[2290]: E0209 19:52:58.538639 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.539028 kubelet[2290]: E0209 19:52:58.539019 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.539028 kubelet[2290]: W0209 19:52:58.539026 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.539114 kubelet[2290]: E0209 19:52:58.539107 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.539374 kubelet[2290]: E0209 19:52:58.539365 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.539374 kubelet[2290]: W0209 19:52:58.539371 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.539436 kubelet[2290]: E0209 19:52:58.539409 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.541402 kubelet[2290]: E0209 19:52:58.541390 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.541402 kubelet[2290]: W0209 19:52:58.541398 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.541515 kubelet[2290]: E0209 19:52:58.541476 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.541637 kubelet[2290]: E0209 19:52:58.541627 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.541637 kubelet[2290]: W0209 19:52:58.541634 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.541701 kubelet[2290]: E0209 19:52:58.541688 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.541761 kubelet[2290]: E0209 19:52:58.541751 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.541761 kubelet[2290]: W0209 19:52:58.541757 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.544980 kubelet[2290]: E0209 19:52:58.541779 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.544980 kubelet[2290]: E0209 19:52:58.542605 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.544980 kubelet[2290]: W0209 19:52:58.542615 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.544980 kubelet[2290]: E0209 19:52:58.542683 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.544980 kubelet[2290]: W0209 19:52:58.542687 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.544980 kubelet[2290]: E0209 19:52:58.542748 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.544980 kubelet[2290]: W0209 19:52:58.542752 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.544980 kubelet[2290]: E0209 19:52:58.542758 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.544980 kubelet[2290]: E0209 19:52:58.542787 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.544980 kubelet[2290]: E0209 19:52:58.542796 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.546742 kubelet[2290]: E0209 19:52:58.546729 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.546800 kubelet[2290]: W0209 19:52:58.546738 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.546800 kubelet[2290]: E0209 19:52:58.546760 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.621140 kubelet[2290]: E0209 19:52:58.621118 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621140 kubelet[2290]: W0209 19:52:58.621130 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621140 kubelet[2290]: E0209 19:52:58.621144 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.621303 kubelet[2290]: E0209 19:52:58.621232 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621303 kubelet[2290]: W0209 19:52:58.621236 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621303 kubelet[2290]: E0209 19:52:58.621242 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.621366 kubelet[2290]: E0209 19:52:58.621309 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621366 kubelet[2290]: W0209 19:52:58.621313 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621366 kubelet[2290]: E0209 19:52:58.621318 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.621420 kubelet[2290]: E0209 19:52:58.621395 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621420 kubelet[2290]: W0209 19:52:58.621399 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621420 kubelet[2290]: E0209 19:52:58.621404 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.621497 kubelet[2290]: E0209 19:52:58.621488 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621497 kubelet[2290]: W0209 19:52:58.621494 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621563 kubelet[2290]: E0209 19:52:58.621500 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.621603 kubelet[2290]: E0209 19:52:58.621595 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621635 kubelet[2290]: W0209 19:52:58.621603 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621635 kubelet[2290]: E0209 19:52:58.621616 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.621699 kubelet[2290]: E0209 19:52:58.621691 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621699 kubelet[2290]: W0209 19:52:58.621697 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621749 kubelet[2290]: E0209 19:52:58.621704 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.621799 kubelet[2290]: E0209 19:52:58.621790 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621799 kubelet[2290]: W0209 19:52:58.621796 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621858 kubelet[2290]: E0209 19:52:58.621804 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.621880 kubelet[2290]: E0209 19:52:58.621868 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621880 kubelet[2290]: W0209 19:52:58.621872 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.621880 kubelet[2290]: E0209 19:52:58.621877 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.621944 kubelet[2290]: E0209 19:52:58.621936 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.621944 kubelet[2290]: W0209 19:52:58.621942 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.622001 kubelet[2290]: E0209 19:52:58.621947 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.622028 kubelet[2290]: E0209 19:52:58.622025 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.622050 kubelet[2290]: W0209 19:52:58.622029 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.622050 kubelet[2290]: E0209 19:52:58.622035 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.622145 kubelet[2290]: E0209 19:52:58.622137 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.622145 kubelet[2290]: W0209 19:52:58.622142 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.622206 kubelet[2290]: E0209 19:52:58.622149 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.622225 kubelet[2290]: E0209 19:52:58.622216 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.622225 kubelet[2290]: W0209 19:52:58.622220 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.622225 kubelet[2290]: E0209 19:52:58.622226 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.622306 kubelet[2290]: E0209 19:52:58.622298 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.622306 kubelet[2290]: W0209 19:52:58.622304 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.622364 kubelet[2290]: E0209 19:52:58.622311 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.622404 kubelet[2290]: E0209 19:52:58.622396 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.622404 kubelet[2290]: W0209 19:52:58.622402 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.622454 kubelet[2290]: E0209 19:52:58.622407 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.622493 kubelet[2290]: E0209 19:52:58.622484 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.622493 kubelet[2290]: W0209 19:52:58.622490 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.622562 kubelet[2290]: E0209 19:52:58.622498 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.622586 kubelet[2290]: E0209 19:52:58.622569 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.622586 kubelet[2290]: W0209 19:52:58.622573 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.622586 kubelet[2290]: E0209 19:52:58.622578 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.625140 kubelet[2290]: E0209 19:52:58.622638 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625140 kubelet[2290]: W0209 19:52:58.622651 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625140 kubelet[2290]: E0209 19:52:58.622660 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.625140 kubelet[2290]: E0209 19:52:58.622744 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625140 kubelet[2290]: W0209 19:52:58.622748 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625140 kubelet[2290]: E0209 19:52:58.622753 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.625140 kubelet[2290]: E0209 19:52:58.622873 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625140 kubelet[2290]: W0209 19:52:58.622877 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625140 kubelet[2290]: E0209 19:52:58.622885 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.625140 kubelet[2290]: E0209 19:52:58.622980 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625320 kubelet[2290]: W0209 19:52:58.622984 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625320 kubelet[2290]: E0209 19:52:58.623021 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.625320 kubelet[2290]: E0209 19:52:58.623067 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625320 kubelet[2290]: W0209 19:52:58.623071 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625320 kubelet[2290]: E0209 19:52:58.623080 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.625320 kubelet[2290]: E0209 19:52:58.623172 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625320 kubelet[2290]: W0209 19:52:58.623177 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625320 kubelet[2290]: E0209 19:52:58.623184 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.625320 kubelet[2290]: E0209 19:52:58.623265 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625320 kubelet[2290]: W0209 19:52:58.623270 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625496 kubelet[2290]: E0209 19:52:58.623280 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.625496 kubelet[2290]: E0209 19:52:58.623431 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625496 kubelet[2290]: W0209 19:52:58.623435 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625496 kubelet[2290]: E0209 19:52:58.623440 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.625496 kubelet[2290]: E0209 19:52:58.623525 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.625496 kubelet[2290]: W0209 19:52:58.623530 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.625496 kubelet[2290]: E0209 19:52:58.623542 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.630045 kubelet[2290]: E0209 19:52:58.630033 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.630045 kubelet[2290]: W0209 19:52:58.630042 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.630148 kubelet[2290]: E0209 19:52:58.630051 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.722560 kubelet[2290]: E0209 19:52:58.722526 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.722560 kubelet[2290]: W0209 19:52:58.722545 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.722560 kubelet[2290]: E0209 19:52:58.722563 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.722762 kubelet[2290]: E0209 19:52:58.722751 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.722800 kubelet[2290]: W0209 19:52:58.722765 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.722800 kubelet[2290]: E0209 19:52:58.722773 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.722907 kubelet[2290]: E0209 19:52:58.722896 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.722907 kubelet[2290]: W0209 19:52:58.722903 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.722958 kubelet[2290]: E0209 19:52:58.722909 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.739264 kubelet[2290]: E0209 19:52:58.739244 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.739264 kubelet[2290]: W0209 19:52:58.739258 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.739363 kubelet[2290]: E0209 19:52:58.739271 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.823955 kubelet[2290]: E0209 19:52:58.823895 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.823955 kubelet[2290]: W0209 19:52:58.823906 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.823955 kubelet[2290]: E0209 19:52:58.823919 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.824080 kubelet[2290]: E0209 19:52:58.824002 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.824080 kubelet[2290]: W0209 19:52:58.824007 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.824080 kubelet[2290]: E0209 19:52:58.824013 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.924691 kubelet[2290]: E0209 19:52:58.924665 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.924691 kubelet[2290]: W0209 19:52:58.924684 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.924842 kubelet[2290]: E0209 19:52:58.924700 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.924979 kubelet[2290]: E0209 19:52:58.924965 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.924979 kubelet[2290]: W0209 19:52:58.924973 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.925042 kubelet[2290]: E0209 19:52:58.924981 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:58.940873 kubelet[2290]: E0209 19:52:58.940851 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:58.941018 kubelet[2290]: W0209 19:52:58.941006 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:58.941136 kubelet[2290]: E0209 19:52:58.941118 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:58.973798 env[1235]: time="2024-02-09T19:52:58.973771807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r7jv2,Uid:9bbcdb2b-337b-44aa-825c-b040fa39823e,Namespace:calico-system,Attempt:0,}" Feb 9 19:52:59.019957 env[1235]: time="2024-02-09T19:52:59.019905394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:59.020068 env[1235]: time="2024-02-09T19:52:59.019943252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:59.020068 env[1235]: time="2024-02-09T19:52:59.019952248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:59.022680 env[1235]: time="2024-02-09T19:52:59.020198506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/627b421c8f01bc10d20d77bdc2f91d954d2906b59d888367dc9afd75f1a7e720 pid=2800 runtime=io.containerd.runc.v2 Feb 9 19:52:59.025526 kubelet[2290]: E0209 19:52:59.025510 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:59.025526 kubelet[2290]: W0209 19:52:59.025522 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:59.025614 kubelet[2290]: E0209 19:52:59.025554 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:59.059478 env[1235]: time="2024-02-09T19:52:59.059445480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r7jv2,Uid:9bbcdb2b-337b-44aa-825c-b040fa39823e,Namespace:calico-system,Attempt:0,} returns sandbox id \"627b421c8f01bc10d20d77bdc2f91d954d2906b59d888367dc9afd75f1a7e720\"" Feb 9 19:52:59.061219 env[1235]: time="2024-02-09T19:52:59.061202115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:52:59.126934 kubelet[2290]: E0209 19:52:59.126558 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:59.126934 kubelet[2290]: W0209 19:52:59.126572 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:59.126934 kubelet[2290]: E0209 19:52:59.126585 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:52:59.137002 kubelet[2290]: E0209 19:52:59.136986 2290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:52:59.137002 kubelet[2290]: W0209 19:52:59.136997 2290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:52:59.137100 kubelet[2290]: E0209 19:52:59.137009 2290 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:52:59.233399 env[1235]: time="2024-02-09T19:52:59.233371352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c6484466-8wzjv,Uid:7c57abef-9ef6-4919-ba9f-099960e6f211,Namespace:calico-system,Attempt:0,}" Feb 9 19:52:59.240004 env[1235]: time="2024-02-09T19:52:59.239886876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:59.240004 env[1235]: time="2024-02-09T19:52:59.239910034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:59.240004 env[1235]: time="2024-02-09T19:52:59.239921687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:59.240111 env[1235]: time="2024-02-09T19:52:59.240018782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dad0f4f2ad83d9d620d6775a7d08ec36c1ccd2072094c7f50863883904fce7e3 pid=2845 runtime=io.containerd.runc.v2 Feb 9 19:52:59.282321 env[1235]: time="2024-02-09T19:52:59.282292732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c6484466-8wzjv,Uid:7c57abef-9ef6-4919-ba9f-099960e6f211,Namespace:calico-system,Attempt:0,} returns sandbox id \"dad0f4f2ad83d9d620d6775a7d08ec36c1ccd2072094c7f50863883904fce7e3\"" Feb 9 19:52:59.313000 audit[2904]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:59.313000 audit[2904]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd71e6b500 a2=0 a3=7ffd71e6b4ec items=0 ppid=2445 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:59.313000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:52:59.313000 audit[2904]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:52:59.313000 audit[2904]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd71e6b500 a2=0 a3=7ffd71e6b4ec items=0 ppid=2445 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:52:59.313000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 
19:52:59.762147 kubelet[2290]: E0209 19:52:59.762100 2290 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:53:01.762276 kubelet[2290]: E0209 19:53:01.762061 2290 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:53:01.996430 env[1235]: time="2024-02-09T19:53:01.996403137Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:01.997377 env[1235]: time="2024-02-09T19:53:01.997360099Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:01.999019 env[1235]: time="2024-02-09T19:53:01.999005478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:02.000667 env[1235]: time="2024-02-09T19:53:02.000649894Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:02.001360 env[1235]: time="2024-02-09T19:53:02.001346688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 19:53:02.002926 env[1235]: time="2024-02-09T19:53:02.002553851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 19:53:02.004016 env[1235]: time="2024-02-09T19:53:02.003996220Z" level=info msg="CreateContainer within sandbox \"627b421c8f01bc10d20d77bdc2f91d954d2906b59d888367dc9afd75f1a7e720\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:53:02.009865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149566009.mount: Deactivated successfully. 
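The repeated kubelet messages above (driver-call.go "Failed to unmarshal output for command: init" and "executable file not found in $PATH") come from the FlexVolume dynamic plugin prober: kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and tries to execute a driver binary named uds with the single argument init, expecting a JSON status object on stdout. The binary is not present on this node, so the call returns empty output, and decoding "" is what yields "unexpected end of JSON input"; the prober then skips the plugin and retries on the next probe, which is why the same triplet recurs. The Go sketch below only illustrates that decode step under the documented FlexVolume conventions ("status", "capabilities"); it is not kubelet's actual driver-call code, and the struct and function names are made up for the example.

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print for
// "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
// Field names follow the FlexVolume convention; the type itself is illustrative.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func decodeInitOutput(output []byte) (*driverStatus, error) {
	var st driverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", string(output), err)
	}
	return &st, nil
}

func main() {
	// Empty stdout is what kubelet gets while the uds binary is missing.
	if _, err := decodeInitOutput(nil); err != nil {
		fmt.Println(err) // reproduces "unexpected end of JSON input" as seen above
	}
	// What a healthy FlexVolume driver would print for "init".
	if st, err := decodeInitOutput([]byte(`{"status":"Success","capabilities":{"attach":false}}`)); err == nil {
		fmt.Printf("driver initialised: %+v\n", *st)
	}
}

The nodeagent~uds directory is typically left behind by an Istio-style node agent; if nothing on this cluster uses FlexVolume, the messages are noise rather than a fault, and removing the empty plugin directory should silence them.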
Feb 9 19:53:02.011808 env[1235]: time="2024-02-09T19:53:02.011785212Z" level=info msg="CreateContainer within sandbox \"627b421c8f01bc10d20d77bdc2f91d954d2906b59d888367dc9afd75f1a7e720\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b55e71a4a5017d021adedb3259ada2048b48744c47aa157cc7d85cb6cec72d80\"" Feb 9 19:53:02.013763 env[1235]: time="2024-02-09T19:53:02.013126184Z" level=info msg="StartContainer for \"b55e71a4a5017d021adedb3259ada2048b48744c47aa157cc7d85cb6cec72d80\"" Feb 9 19:53:02.058360 env[1235]: time="2024-02-09T19:53:02.058330582Z" level=info msg="StartContainer for \"b55e71a4a5017d021adedb3259ada2048b48744c47aa157cc7d85cb6cec72d80\" returns successfully" Feb 9 19:53:02.372881 env[1235]: time="2024-02-09T19:53:02.372805283Z" level=info msg="shim disconnected" id=b55e71a4a5017d021adedb3259ada2048b48744c47aa157cc7d85cb6cec72d80 Feb 9 19:53:02.373051 env[1235]: time="2024-02-09T19:53:02.373029858Z" level=warning msg="cleaning up after shim disconnected" id=b55e71a4a5017d021adedb3259ada2048b48744c47aa157cc7d85cb6cec72d80 namespace=k8s.io Feb 9 19:53:02.373122 env[1235]: time="2024-02-09T19:53:02.373107755Z" level=info msg="cleaning up dead shim" Feb 9 19:53:02.379417 env[1235]: time="2024-02-09T19:53:02.379398301Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:53:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2952 runtime=io.containerd.runc.v2\n" Feb 9 19:53:03.008630 systemd[1]: run-containerd-runc-k8s.io-b55e71a4a5017d021adedb3259ada2048b48744c47aa157cc7d85cb6cec72d80-runc.vMyG0e.mount: Deactivated successfully. Feb 9 19:53:03.008748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b55e71a4a5017d021adedb3259ada2048b48744c47aa157cc7d85cb6cec72d80-rootfs.mount: Deactivated successfully. Feb 9 19:53:03.455512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974246389.mount: Deactivated successfully. 
Feb 9 19:53:03.762411 kubelet[2290]: E0209 19:53:03.762390 2290 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:53:04.739255 env[1235]: time="2024-02-09T19:53:04.739225584Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:04.740050 env[1235]: time="2024-02-09T19:53:04.740036763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:04.741201 env[1235]: time="2024-02-09T19:53:04.741188181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:04.742287 env[1235]: time="2024-02-09T19:53:04.742268358Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:04.743298 env[1235]: time="2024-02-09T19:53:04.743278439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 9 19:53:04.743810 env[1235]: time="2024-02-09T19:53:04.743798061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 19:53:04.761497 env[1235]: time="2024-02-09T19:53:04.761469779Z" level=info msg="CreateContainer within sandbox \"dad0f4f2ad83d9d620d6775a7d08ec36c1ccd2072094c7f50863883904fce7e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 19:53:04.785212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319908873.mount: Deactivated successfully. 
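Each successful pull above reports two different digests for the same image: the `@sha256:...` name is the digest of the image manifest (the repo digest served by the registry), while the bare `sha256:...` reference that PullImage returns is the image ID, i.e. the digest of the image config blob. A small sketch of how that ID is derived, shown only as an illustration of the OCI convention rather than containerd's code:

```python
import hashlib

def oci_image_id(config_blob: bytes) -> str:
    """OCI image ID = sha256 over the raw image *config* JSON.

    This is why ghcr.io/flatcar/calico/typha:v3.27.0 appears above both as an
    @sha256:... repo digest (manifest) and as a separate sha256:... image
    reference (config digest).
    """
    return "sha256:" + hashlib.sha256(config_blob).hexdigest()
```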
Feb 9 19:53:04.846115 env[1235]: time="2024-02-09T19:53:04.846087995Z" level=info msg="CreateContainer within sandbox \"dad0f4f2ad83d9d620d6775a7d08ec36c1ccd2072094c7f50863883904fce7e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"830a08344c8dad913ed1828ae135a45af5fda47ffe77d615a495798b44b0507a\"" Feb 9 19:53:04.847367 env[1235]: time="2024-02-09T19:53:04.847347129Z" level=info msg="StartContainer for \"830a08344c8dad913ed1828ae135a45af5fda47ffe77d615a495798b44b0507a\"" Feb 9 19:53:04.907124 env[1235]: time="2024-02-09T19:53:04.907082571Z" level=info msg="StartContainer for \"830a08344c8dad913ed1828ae135a45af5fda47ffe77d615a495798b44b0507a\" returns successfully" Feb 9 19:53:05.762957 kubelet[2290]: E0209 19:53:05.762940 2290 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:53:05.818138 kubelet[2290]: I0209 19:53:05.818117 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-64c6484466-8wzjv" podStartSLOduration=-9.223372029036686e+09 pod.CreationTimestamp="2024-02-09 19:52:58 +0000 UTC" firstStartedPulling="2024-02-09 19:52:59.283062097 +0000 UTC m=+22.688427881" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:53:05.816755268 +0000 UTC m=+29.222121077" watchObservedRunningTime="2024-02-09 19:53:05.818090677 +0000 UTC m=+29.223456476" Feb 9 19:53:05.854090 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 19:53:05.854170 kernel: audit: type=1325 audit(1707508385.850:280): table=filter:109 family=2 entries=13 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:05.850000 audit[3040]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:05.858437 kernel: audit: type=1300 audit(1707508385.850:280): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe43568520 a2=0 a3=7ffe4356850c items=0 ppid=2445 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:05.850000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe43568520 a2=0 a3=7ffe4356850c items=0 ppid=2445 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:05.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:05.861549 kernel: audit: type=1327 audit(1707508385.850:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:05.858000 audit[3040]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:05.858000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffe43568520 a2=0 a3=7ffe4356850c items=0 ppid=2445 pid=3040 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:05.867518 kernel: audit: type=1325 audit(1707508385.858:281): table=nat:110 family=2 entries=27 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:05.867591 kernel: audit: type=1300 audit(1707508385.858:281): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffe43568520 a2=0 a3=7ffe4356850c items=0 ppid=2445 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:05.867618 kernel: audit: type=1327 audit(1707508385.858:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:05.858000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:06.163914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975667957.mount: Deactivated successfully. Feb 9 19:53:07.762247 kubelet[2290]: E0209 19:53:07.762045 2290 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:53:08.731513 env[1235]: time="2024-02-09T19:53:08.731464364Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:08.733714 env[1235]: time="2024-02-09T19:53:08.733698246Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:08.735062 env[1235]: time="2024-02-09T19:53:08.735047198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:08.736200 env[1235]: time="2024-02-09T19:53:08.736189057Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:08.736930 env[1235]: time="2024-02-09T19:53:08.736906864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 19:53:08.739812 env[1235]: time="2024-02-09T19:53:08.739789763Z" level=info msg="CreateContainer within sandbox \"627b421c8f01bc10d20d77bdc2f91d954d2906b59d888367dc9afd75f1a7e720\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:53:08.746189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032696301.mount: Deactivated successfully. 
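The audit records here (and throughout the rest of this log) carry the command line as a hex-encoded PROCTITLE field with NUL-separated arguments. A small helper makes those readable; the value from the iptables-restore records above is used as the example:

```python
def decode_proctitle(hex_blob: str) -> str:
    """Audit PROCTITLE fields are hex-encoded argv with NUL separators."""
    return bytes.fromhex(hex_blob).replace(b"\x00", b" ").decode()

# From the NETFILTER_CFG/SYSCALL records above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```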
Feb 9 19:53:08.748741 env[1235]: time="2024-02-09T19:53:08.748709170Z" level=info msg="CreateContainer within sandbox \"627b421c8f01bc10d20d77bdc2f91d954d2906b59d888367dc9afd75f1a7e720\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f81d6cfb13f52d156dce671a7c0eb57caec7bf86a384e87c8cb906cb71bf3918\"" Feb 9 19:53:08.749249 env[1235]: time="2024-02-09T19:53:08.749235624Z" level=info msg="StartContainer for \"f81d6cfb13f52d156dce671a7c0eb57caec7bf86a384e87c8cb906cb71bf3918\"" Feb 9 19:53:08.817137 env[1235]: time="2024-02-09T19:53:08.816751005Z" level=info msg="StartContainer for \"f81d6cfb13f52d156dce671a7c0eb57caec7bf86a384e87c8cb906cb71bf3918\" returns successfully" Feb 9 19:53:09.762588 kubelet[2290]: E0209 19:53:09.762566 2290 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:53:10.913291 env[1235]: time="2024-02-09T19:53:10.913238158Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:53:10.927426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f81d6cfb13f52d156dce671a7c0eb57caec7bf86a384e87c8cb906cb71bf3918-rootfs.mount: Deactivated successfully. Feb 9 19:53:10.930134 env[1235]: time="2024-02-09T19:53:10.930109265Z" level=info msg="shim disconnected" id=f81d6cfb13f52d156dce671a7c0eb57caec7bf86a384e87c8cb906cb71bf3918 Feb 9 19:53:10.930216 env[1235]: time="2024-02-09T19:53:10.930201146Z" level=warning msg="cleaning up after shim disconnected" id=f81d6cfb13f52d156dce671a7c0eb57caec7bf86a384e87c8cb906cb71bf3918 namespace=k8s.io Feb 9 19:53:10.930360 env[1235]: time="2024-02-09T19:53:10.930350206Z" level=info msg="cleaning up dead shim" Feb 9 19:53:10.936215 env[1235]: time="2024-02-09T19:53:10.936189982Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:53:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3095 runtime=io.containerd.runc.v2\n" Feb 9 19:53:10.952560 kubelet[2290]: I0209 19:53:10.952163 2290 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:53:10.977764 kubelet[2290]: I0209 19:53:10.977556 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:53:10.983254 kubelet[2290]: I0209 19:53:10.983138 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:53:10.983254 kubelet[2290]: I0209 19:53:10.983227 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:53:11.100282 kubelet[2290]: I0209 19:53:11.100261 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd916192-50ee-4b61-9539-fadb37ab3765-config-volume\") pod \"coredns-787d4945fb-9q2kb\" (UID: \"fd916192-50ee-4b61-9539-fadb37ab3765\") " pod="kube-system/coredns-787d4945fb-9q2kb" Feb 9 19:53:11.100521 kubelet[2290]: I0209 19:53:11.100511 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9468b16-76c1-4bea-9723-d2d33c9557c7-tigera-ca-bundle\") pod \"calico-kube-controllers-f9496d88-gm4g9\" (UID: 
\"f9468b16-76c1-4bea-9723-d2d33c9557c7\") " pod="calico-system/calico-kube-controllers-f9496d88-gm4g9" Feb 9 19:53:11.100631 kubelet[2290]: I0209 19:53:11.100625 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1251f47a-f8b9-441f-b4c8-207de9468685-config-volume\") pod \"coredns-787d4945fb-b9gw4\" (UID: \"1251f47a-f8b9-441f-b4c8-207de9468685\") " pod="kube-system/coredns-787d4945fb-b9gw4" Feb 9 19:53:11.100716 kubelet[2290]: I0209 19:53:11.100709 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5b5\" (UniqueName: \"kubernetes.io/projected/f9468b16-76c1-4bea-9723-d2d33c9557c7-kube-api-access-pt5b5\") pod \"calico-kube-controllers-f9496d88-gm4g9\" (UID: \"f9468b16-76c1-4bea-9723-d2d33c9557c7\") " pod="calico-system/calico-kube-controllers-f9496d88-gm4g9" Feb 9 19:53:11.100803 kubelet[2290]: I0209 19:53:11.100797 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxbs\" (UniqueName: \"kubernetes.io/projected/1251f47a-f8b9-441f-b4c8-207de9468685-kube-api-access-zdxbs\") pod \"coredns-787d4945fb-b9gw4\" (UID: \"1251f47a-f8b9-441f-b4c8-207de9468685\") " pod="kube-system/coredns-787d4945fb-b9gw4" Feb 9 19:53:11.100890 kubelet[2290]: I0209 19:53:11.100883 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qr57\" (UniqueName: \"kubernetes.io/projected/fd916192-50ee-4b61-9539-fadb37ab3765-kube-api-access-6qr57\") pod \"coredns-787d4945fb-9q2kb\" (UID: \"fd916192-50ee-4b61-9539-fadb37ab3765\") " pod="kube-system/coredns-787d4945fb-9q2kb" Feb 9 19:53:11.302549 env[1235]: time="2024-02-09T19:53:11.302511332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-9q2kb,Uid:fd916192-50ee-4b61-9539-fadb37ab3765,Namespace:kube-system,Attempt:0,}" Feb 9 19:53:11.304063 env[1235]: time="2024-02-09T19:53:11.303954664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f9496d88-gm4g9,Uid:f9468b16-76c1-4bea-9723-d2d33c9557c7,Namespace:calico-system,Attempt:0,}" Feb 9 19:53:11.304283 env[1235]: time="2024-02-09T19:53:11.304264753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b9gw4,Uid:1251f47a-f8b9-441f-b4c8-207de9468685,Namespace:kube-system,Attempt:0,}" Feb 9 19:53:11.395428 env[1235]: time="2024-02-09T19:53:11.395381650Z" level=error msg="Failed to destroy network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.395765 env[1235]: time="2024-02-09T19:53:11.395747850Z" level=error msg="encountered an error cleaning up failed sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.395861 env[1235]: time="2024-02-09T19:53:11.395844397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b9gw4,Uid:1251f47a-f8b9-441f-b4c8-207de9468685,Namespace:kube-system,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.396081 kubelet[2290]: E0209 19:53:11.396061 2290 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.397026 kubelet[2290]: E0209 19:53:11.397012 2290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-b9gw4" Feb 9 19:53:11.397068 kubelet[2290]: E0209 19:53:11.397035 2290 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-b9gw4" Feb 9 19:53:11.397490 kubelet[2290]: E0209 19:53:11.397469 2290 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-b9gw4_kube-system(1251f47a-f8b9-441f-b4c8-207de9468685)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-b9gw4_kube-system(1251f47a-f8b9-441f-b4c8-207de9468685)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-b9gw4" podUID=1251f47a-f8b9-441f-b4c8-207de9468685 Feb 9 19:53:11.400558 env[1235]: time="2024-02-09T19:53:11.400514029Z" level=error msg="Failed to destroy network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.400855 env[1235]: time="2024-02-09T19:53:11.400838732Z" level=error msg="encountered an error cleaning up failed sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.400938 env[1235]: time="2024-02-09T19:53:11.400921948Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-f9496d88-gm4g9,Uid:f9468b16-76c1-4bea-9723-d2d33c9557c7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.401124 kubelet[2290]: E0209 19:53:11.401109 2290 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.401175 kubelet[2290]: E0209 19:53:11.401141 2290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f9496d88-gm4g9" Feb 9 19:53:11.401175 kubelet[2290]: E0209 19:53:11.401156 2290 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f9496d88-gm4g9" Feb 9 19:53:11.401219 kubelet[2290]: E0209 19:53:11.401188 2290 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f9496d88-gm4g9_calico-system(f9468b16-76c1-4bea-9723-d2d33c9557c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f9496d88-gm4g9_calico-system(f9468b16-76c1-4bea-9723-d2d33c9557c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f9496d88-gm4g9" podUID=f9468b16-76c1-4bea-9723-d2d33c9557c7 Feb 9 19:53:11.403150 env[1235]: time="2024-02-09T19:53:11.403114646Z" level=error msg="Failed to destroy network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.403352 env[1235]: time="2024-02-09T19:53:11.403331936Z" level=error msg="encountered an error cleaning up failed sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 9 19:53:11.403396 env[1235]: time="2024-02-09T19:53:11.403361009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-9q2kb,Uid:fd916192-50ee-4b61-9539-fadb37ab3765,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.403459 kubelet[2290]: E0209 19:53:11.403446 2290 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.404276 kubelet[2290]: E0209 19:53:11.403470 2290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-9q2kb" Feb 9 19:53:11.404276 kubelet[2290]: E0209 19:53:11.403484 2290 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-9q2kb" Feb 9 19:53:11.404276 kubelet[2290]: E0209 19:53:11.403510 2290 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-9q2kb_kube-system(fd916192-50ee-4b61-9539-fadb37ab3765)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-9q2kb_kube-system(fd916192-50ee-4b61-9539-fadb37ab3765)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-9q2kb" podUID=fd916192-50ee-4b61-9539-fadb37ab3765 Feb 9 19:53:11.765005 env[1235]: time="2024-02-09T19:53:11.764982628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c8lr,Uid:0895769c-c5fb-4668-b9c0-ae965d762f27,Namespace:calico-system,Attempt:0,}" Feb 9 19:53:11.799056 env[1235]: time="2024-02-09T19:53:11.799011749Z" level=error msg="Failed to destroy network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.799261 env[1235]: time="2024-02-09T19:53:11.799242091Z" level=error msg="encountered an error cleaning up failed sandbox 
\"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.799298 env[1235]: time="2024-02-09T19:53:11.799275543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c8lr,Uid:0895769c-c5fb-4668-b9c0-ae965d762f27,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.799597 kubelet[2290]: E0209 19:53:11.799425 2290 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.799597 kubelet[2290]: E0209 19:53:11.799483 2290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c8lr" Feb 9 19:53:11.799597 kubelet[2290]: E0209 19:53:11.799499 2290 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c8lr" Feb 9 19:53:11.799682 kubelet[2290]: E0209 19:53:11.799542 2290 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7c8lr_calico-system(0895769c-c5fb-4668-b9c0-ae965d762f27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7c8lr_calico-system(0895769c-c5fb-4668-b9c0-ae965d762f27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:53:11.824091 kubelet[2290]: I0209 19:53:11.824072 2290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:11.825058 kubelet[2290]: I0209 19:53:11.824921 2290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:11.831824 kubelet[2290]: I0209 19:53:11.830577 2290 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:11.832623 env[1235]: time="2024-02-09T19:53:11.832604698Z" level=info msg="StopPodSandbox for \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\"" Feb 9 19:53:11.833129 env[1235]: time="2024-02-09T19:53:11.833053977Z" level=info msg="StopPodSandbox for \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\"" Feb 9 19:53:11.835800 env[1235]: time="2024-02-09T19:53:11.835775901Z" level=info msg="StopPodSandbox for \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\"" Feb 9 19:53:11.836310 kubelet[2290]: I0209 19:53:11.836284 2290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:11.837618 env[1235]: time="2024-02-09T19:53:11.836778466Z" level=info msg="StopPodSandbox for \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\"" Feb 9 19:53:11.851582 env[1235]: time="2024-02-09T19:53:11.849131279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:53:11.882901 env[1235]: time="2024-02-09T19:53:11.882863523Z" level=error msg="StopPodSandbox for \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\" failed" error="failed to destroy network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.883346 kubelet[2290]: E0209 19:53:11.883170 2290 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:11.883346 kubelet[2290]: E0209 19:53:11.883210 2290 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60} Feb 9 19:53:11.883346 kubelet[2290]: E0209 19:53:11.883252 2290 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0895769c-c5fb-4668-b9c0-ae965d762f27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:53:11.883346 kubelet[2290]: E0209 19:53:11.883278 2290 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0895769c-c5fb-4668-b9c0-ae965d762f27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-7c8lr" podUID=0895769c-c5fb-4668-b9c0-ae965d762f27 Feb 9 19:53:11.890505 env[1235]: time="2024-02-09T19:53:11.890471644Z" level=error msg="StopPodSandbox for \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\" failed" error="failed to destroy network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.890862 kubelet[2290]: E0209 19:53:11.890762 2290 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:11.890862 kubelet[2290]: E0209 19:53:11.890788 2290 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083} Feb 9 19:53:11.890862 kubelet[2290]: E0209 19:53:11.890809 2290 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9468b16-76c1-4bea-9723-d2d33c9557c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:53:11.890862 kubelet[2290]: E0209 19:53:11.890838 2290 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9468b16-76c1-4bea-9723-d2d33c9557c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f9496d88-gm4g9" podUID=f9468b16-76c1-4bea-9723-d2d33c9557c7 Feb 9 19:53:11.902220 env[1235]: time="2024-02-09T19:53:11.902181569Z" level=error msg="StopPodSandbox for \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\" failed" error="failed to destroy network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.902520 kubelet[2290]: E0209 19:53:11.902422 2290 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" 
Feb 9 19:53:11.902520 kubelet[2290]: E0209 19:53:11.902450 2290 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7} Feb 9 19:53:11.902520 kubelet[2290]: E0209 19:53:11.902476 2290 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1251f47a-f8b9-441f-b4c8-207de9468685\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:53:11.902520 kubelet[2290]: E0209 19:53:11.902504 2290 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1251f47a-f8b9-441f-b4c8-207de9468685\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-b9gw4" podUID=1251f47a-f8b9-441f-b4c8-207de9468685 Feb 9 19:53:11.902743 env[1235]: time="2024-02-09T19:53:11.902724602Z" level=error msg="StopPodSandbox for \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\" failed" error="failed to destroy network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:53:11.902949 kubelet[2290]: E0209 19:53:11.902861 2290 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:11.902949 kubelet[2290]: E0209 19:53:11.902899 2290 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b} Feb 9 19:53:11.902949 kubelet[2290]: E0209 19:53:11.902917 2290 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd916192-50ee-4b61-9539-fadb37ab3765\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:53:11.902949 kubelet[2290]: E0209 19:53:11.902933 2290 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd916192-50ee-4b61-9539-fadb37ab3765\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-9q2kb" podUID=fd916192-50ee-4b61-9539-fadb37ab3765 Feb 9 19:53:11.928160 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7-shm.mount: Deactivated successfully. Feb 9 19:53:16.658358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467192015.mount: Deactivated successfully. Feb 9 19:53:16.757860 env[1235]: time="2024-02-09T19:53:16.757811662Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:16.760313 env[1235]: time="2024-02-09T19:53:16.760291588Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:16.761256 env[1235]: time="2024-02-09T19:53:16.761241972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:16.762977 env[1235]: time="2024-02-09T19:53:16.762961519Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:16.763235 env[1235]: time="2024-02-09T19:53:16.763221898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:53:16.786106 env[1235]: time="2024-02-09T19:53:16.786084600Z" level=info msg="CreateContainer within sandbox \"627b421c8f01bc10d20d77bdc2f91d954d2906b59d888367dc9afd75f1a7e720\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:53:16.792951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326059922.mount: Deactivated successfully. 
Feb 9 19:53:16.796124 env[1235]: time="2024-02-09T19:53:16.796097530Z" level=info msg="CreateContainer within sandbox \"627b421c8f01bc10d20d77bdc2f91d954d2906b59d888367dc9afd75f1a7e720\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"874e6eab0a4f4e8cc94cf2a43b8913d2a2d4935b47737d92040b6726e17d2296\"" Feb 9 19:53:16.798129 env[1235]: time="2024-02-09T19:53:16.798099001Z" level=info msg="StartContainer for \"874e6eab0a4f4e8cc94cf2a43b8913d2a2d4935b47737d92040b6726e17d2296\"" Feb 9 19:53:16.855559 env[1235]: time="2024-02-09T19:53:16.855516197Z" level=info msg="StartContainer for \"874e6eab0a4f4e8cc94cf2a43b8913d2a2d4935b47737d92040b6726e17d2296\" returns successfully" Feb 9 19:53:16.919796 kubelet[2290]: I0209 19:53:16.919726 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-r7jv2" podStartSLOduration=-9.223372017941936e+09 pod.CreationTimestamp="2024-02-09 19:52:58 +0000 UTC" firstStartedPulling="2024-02-09 19:52:59.060049339 +0000 UTC m=+22.465415123" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:53:16.901098695 +0000 UTC m=+40.306464483" watchObservedRunningTime="2024-02-09 19:53:16.912839396 +0000 UTC m=+40.318205186" Feb 9 19:53:17.429498 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:53:17.431962 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 19:53:17.892165 systemd[1]: run-containerd-runc-k8s.io-874e6eab0a4f4e8cc94cf2a43b8913d2a2d4935b47737d92040b6726e17d2296-runc.ra2wLO.mount: Deactivated successfully. Feb 9 19:53:18.796000 audit[3488]: AVC avc: denied { write } for pid=3488 comm="tee" name="fd" dev="proc" ino=36007 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:53:18.817299 kernel: audit: type=1400 audit(1707508398.796:282): avc: denied { write } for pid=3488 comm="tee" name="fd" dev="proc" ino=36007 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:53:18.822326 kernel: audit: type=1400 audit(1707508398.797:283): avc: denied { write } for pid=3474 comm="tee" name="fd" dev="proc" ino=35459 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:53:18.822374 kernel: audit: type=1300 audit(1707508398.797:283): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd79b297f a2=241 a3=1b6 items=1 ppid=3439 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.822391 kernel: audit: type=1307 audit(1707508398.797:283): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:53:18.822406 kernel: audit: type=1302 audit(1707508398.797:283): item=0 name="/dev/fd/63" inode=35983 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.822424 kernel: audit: type=1327 audit(1707508398.797:283): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:18.797000 audit[3474]: AVC avc: denied { write } for pid=3474 comm="tee" name="fd" dev="proc" ino=35459 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir 
permissive=0 Feb 9 19:53:18.797000 audit[3474]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd79b297f a2=241 a3=1b6 items=1 ppid=3439 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.861818 kernel: audit: type=1300 audit(1707508398.796:282): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7b0e6980 a2=241 a3=1b6 items=1 ppid=3431 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.862300 kernel: audit: type=1307 audit(1707508398.796:282): cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:53:18.862890 kernel: audit: type=1302 audit(1707508398.796:282): item=0 name="/dev/fd/63" inode=36001 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.862911 kernel: audit: type=1327 audit(1707508398.796:282): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:18.797000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:53:18.797000 audit: PATH item=0 name="/dev/fd/63" inode=35983 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:18.796000 audit[3488]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7b0e6980 a2=241 a3=1b6 items=1 ppid=3431 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.796000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:53:18.796000 audit: PATH item=0 name="/dev/fd/63" inode=36001 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.796000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:18.807000 audit[3486]: AVC avc: denied { write } for pid=3486 comm="tee" name="fd" dev="proc" ino=35465 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:53:18.807000 audit[3486]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd584e998f a2=241 a3=1b6 items=1 ppid=3432 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.807000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:53:18.807000 audit: PATH item=0 name="/dev/fd/63" inode=35450 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.807000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:18.808000 audit[3497]: AVC avc: denied { write } for pid=3497 comm="tee" name="fd" dev="proc" ino=35469 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:53:18.808000 audit[3497]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0f85e98f a2=241 a3=1b6 items=1 ppid=3452 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.808000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:53:18.808000 audit: PATH item=0 name="/dev/fd/63" inode=36004 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:18.816000 audit[3484]: AVC avc: denied { write } for pid=3484 comm="tee" name="fd" dev="proc" ino=35473 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:53:18.816000 audit[3484]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd81f2798f a2=241 a3=1b6 items=1 ppid=3435 pid=3484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.816000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:53:18.816000 audit: PATH item=0 name="/dev/fd/63" inode=35449 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:18.818000 audit[3492]: AVC avc: denied { write } for pid=3492 comm="tee" name="fd" dev="proc" ino=35478 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:53:18.818000 audit[3492]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc50974990 a2=241 a3=1b6 items=1 ppid=3442 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.818000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:53:18.818000 audit: PATH item=0 name="/dev/fd/63" inode=35451 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:18.822000 audit[3505]: AVC avc: denied { write } for pid=3505 comm="tee" name="fd" dev="proc" ino=35482 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:53:18.822000 audit[3505]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7ffd775a2991 a2=241 a3=1b6 items=1 ppid=3441 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:18.822000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:53:18.822000 audit: PATH item=0 name="/dev/fd/63" inode=35475 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:53:18.822000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.096000 audit: BPF prog-id=10 op=LOAD Feb 9 19:53:19.096000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0c8b6620 a2=70 a3=7f977f129000 items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.096000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:53:19.097000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit: BPF prog-id=11 op=LOAD Feb 9 19:53:19.097000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0c8b6620 a2=70 a3=6e items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.097000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:53:19.097000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc0c8b65d0 a2=70 a3=7ffc0c8b6620 items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.097000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit: BPF prog-id=12 op=LOAD Feb 9 19:53:19.097000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0c8b65b0 a2=70 a3=7ffc0c8b6620 items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.097000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:53:19.097000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc0c8b6690 a2=70 a3=0 items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.097000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc0c8b6680 a2=70 a3=0 items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.097000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:53:19.097000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.097000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc0c8b66c0 a2=70 a3=0 items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.097000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { perfmon } for pid=3572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit[3572]: AVC avc: denied { bpf } for pid=3572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.100000 audit: BPF prog-id=13 op=LOAD Feb 9 19:53:19.100000 audit[3572]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0c8b65e0 a2=70 a3=ffffffff items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.100000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:53:19.104000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.104000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff3326b850 a2=70 a3=fff80800 items=0 ppid=3461 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:53:19.104000 audit[3576]: AVC avc: denied { bpf } for pid=3576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:53:19.104000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff3326b720 a2=70 a3=3 items=0 ppid=3461 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.104000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:53:19.110000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:53:19.168000 audit[3610]: NETFILTER_CFG table=nat:111 family=2 entries=16 op=nft_register_chain pid=3610 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:19.168000 audit[3610]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffc60394c90 a2=0 a3=7ffc60394c7c items=0 ppid=3461 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.168000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:19.169000 audit[3611]: NETFILTER_CFG table=mangle:112 family=2 entries=19 op=nft_register_chain pid=3611 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:19.169000 audit[3611]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7fff5d2d8f90 a2=0 a3=563d5ff74000 items=0 ppid=3461 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.169000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:19.174000 audit[3609]: NETFILTER_CFG table=raw:113 family=2 entries=19 op=nft_register_chain 
pid=3609 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:19.174000 audit[3609]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffc84abf370 a2=0 a3=7ffc84abf35c items=0 ppid=3461 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.174000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:19.177000 audit[3614]: NETFILTER_CFG table=filter:114 family=2 entries=39 op=nft_register_chain pid=3614 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:19.177000 audit[3614]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffe4bb02e90 a2=0 a3=55b3f04ff000 items=0 ppid=3461 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:19.177000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:20.003570 systemd-networkd[1111]: vxlan.calico: Link UP Feb 9 19:53:20.003583 systemd-networkd[1111]: vxlan.calico: Gained carrier Feb 9 19:53:21.261682 systemd-networkd[1111]: vxlan.calico: Gained IPv6LL Feb 9 19:53:22.763385 env[1235]: time="2024-02-09T19:53:22.763326092Z" level=info msg="StopPodSandbox for \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\"" Feb 9 19:53:22.764112 env[1235]: time="2024-02-09T19:53:22.764034243Z" level=info msg="StopPodSandbox for \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\"" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.134 [INFO][3655] k8s.go 578: Cleaning up netns ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.134 [INFO][3655] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" iface="eth0" netns="/var/run/netns/cni-b3b9cd4d-c6b9-70e7-ebee-a99122127045" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.135 [INFO][3655] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" iface="eth0" netns="/var/run/netns/cni-b3b9cd4d-c6b9-70e7-ebee-a99122127045" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.135 [INFO][3655] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" iface="eth0" netns="/var/run/netns/cni-b3b9cd4d-c6b9-70e7-ebee-a99122127045" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.135 [INFO][3655] k8s.go 585: Releasing IP address(es) ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.135 [INFO][3655] utils.go 188: Calico CNI releasing IP address ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.308 [INFO][3671] ipam_plugin.go 415: Releasing address using handleID ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.309 [INFO][3671] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.309 [INFO][3671] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.322 [WARNING][3671] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.322 [INFO][3671] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.323 [INFO][3671] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:23.332225 env[1235]: 2024-02-09 19:53:23.330 [INFO][3655] k8s.go 591: Teardown processing complete. ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.130 [INFO][3656] k8s.go 578: Cleaning up netns ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.130 [INFO][3656] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" iface="eth0" netns="/var/run/netns/cni-ab1394df-8f61-1924-2697-768488a3b7a1" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.130 [INFO][3656] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" iface="eth0" netns="/var/run/netns/cni-ab1394df-8f61-1924-2697-768488a3b7a1" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.131 [INFO][3656] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" iface="eth0" netns="/var/run/netns/cni-ab1394df-8f61-1924-2697-768488a3b7a1" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.131 [INFO][3656] k8s.go 585: Releasing IP address(es) ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.131 [INFO][3656] utils.go 188: Calico CNI releasing IP address ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.307 [INFO][3670] ipam_plugin.go 415: Releasing address using handleID ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.309 [INFO][3670] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.323 [INFO][3670] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.327 [WARNING][3670] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.327 [INFO][3670] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.328 [INFO][3670] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:23.337357 env[1235]: 2024-02-09 19:53:23.330 [INFO][3656] k8s.go 591: Teardown processing complete. ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:23.337357 env[1235]: time="2024-02-09T19:53:23.335492356Z" level=info msg="TearDown network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\" successfully" Feb 9 19:53:23.337357 env[1235]: time="2024-02-09T19:53:23.335513867Z" level=info msg="StopPodSandbox for \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\" returns successfully" Feb 9 19:53:23.337357 env[1235]: time="2024-02-09T19:53:23.335653341Z" level=info msg="TearDown network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\" successfully" Feb 9 19:53:23.337357 env[1235]: time="2024-02-09T19:53:23.335664216Z" level=info msg="StopPodSandbox for \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\" returns successfully" Feb 9 19:53:23.333631 systemd[1]: run-netns-cni\x2db3b9cd4d\x2dc6b9\x2d70e7\x2debee\x2da99122127045.mount: Deactivated successfully. 
Feb 9 19:53:23.338369 env[1235]: time="2024-02-09T19:53:23.337739946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f9496d88-gm4g9,Uid:f9468b16-76c1-4bea-9723-d2d33c9557c7,Namespace:calico-system,Attempt:1,}" Feb 9 19:53:23.338369 env[1235]: time="2024-02-09T19:53:23.338164120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c8lr,Uid:0895769c-c5fb-4668-b9c0-ae965d762f27,Namespace:calico-system,Attempt:1,}" Feb 9 19:53:23.335195 systemd[1]: run-netns-cni\x2dab1394df\x2d8f61\x2d1924\x2d2697\x2d768488a3b7a1.mount: Deactivated successfully. Feb 9 19:53:23.447421 systemd-networkd[1111]: cali0c1618ef35c: Link UP Feb 9 19:53:23.450153 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:53:23.450202 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0c1618ef35c: link becomes ready Feb 9 19:53:23.450056 systemd-networkd[1111]: cali0c1618ef35c: Gained carrier Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.377 [INFO][3683] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7c8lr-eth0 csi-node-driver- calico-system 0895769c-c5fb-4668-b9c0-ae965d762f27 678 0 2024-02-09 19:52:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-7c8lr eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali0c1618ef35c [] []}} ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Namespace="calico-system" Pod="csi-node-driver-7c8lr" WorkloadEndpoint="localhost-k8s-csi--node--driver--7c8lr-" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.378 [INFO][3683] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Namespace="calico-system" Pod="csi-node-driver-7c8lr" WorkloadEndpoint="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.411 [INFO][3705] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" HandleID="k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.418 [INFO][3705] ipam_plugin.go 268: Auto assigning IP ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" HandleID="k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060d3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7c8lr", "timestamp":"2024-02-09 19:53:23.409862539 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.418 [INFO][3705] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.418 [INFO][3705] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.418 [INFO][3705] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.419 [INFO][3705] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.423 [INFO][3705] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.429 [INFO][3705] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.430 [INFO][3705] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.431 [INFO][3705] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.431 [INFO][3705] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.432 [INFO][3705] ipam.go 1682: Creating new handle: k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30 Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.434 [INFO][3705] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.438 [INFO][3705] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.438 [INFO][3705] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" host="localhost" Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.438 [INFO][3705] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:53:23.467268 env[1235]: 2024-02-09 19:53:23.438 [INFO][3705] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" HandleID="k8s-pod-network.20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.472190 env[1235]: 2024-02-09 19:53:23.439 [INFO][3683] k8s.go 385: Populated endpoint ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Namespace="calico-system" Pod="csi-node-driver-7c8lr" WorkloadEndpoint="localhost-k8s-csi--node--driver--7c8lr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7c8lr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0895769c-c5fb-4668-b9c0-ae965d762f27", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7c8lr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0c1618ef35c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:23.472190 env[1235]: 2024-02-09 19:53:23.439 [INFO][3683] k8s.go 386: Calico CNI using IPs: [192.168.88.129/32] ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Namespace="calico-system" Pod="csi-node-driver-7c8lr" WorkloadEndpoint="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.472190 env[1235]: 2024-02-09 19:53:23.439 [INFO][3683] dataplane_linux.go 68: Setting the host side veth name to cali0c1618ef35c ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Namespace="calico-system" Pod="csi-node-driver-7c8lr" WorkloadEndpoint="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.472190 env[1235]: 2024-02-09 19:53:23.450 [INFO][3683] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Namespace="calico-system" Pod="csi-node-driver-7c8lr" WorkloadEndpoint="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.472190 env[1235]: 2024-02-09 19:53:23.450 [INFO][3683] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Namespace="calico-system" Pod="csi-node-driver-7c8lr" WorkloadEndpoint="localhost-k8s-csi--node--driver--7c8lr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7c8lr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0895769c-c5fb-4668-b9c0-ae965d762f27", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30", Pod:"csi-node-driver-7c8lr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0c1618ef35c", MAC:"b6:52:7b:df:ec:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:23.472190 env[1235]: 2024-02-09 19:53:23.461 [INFO][3683] k8s.go 491: Wrote updated endpoint to datastore ContainerID="20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30" Namespace="calico-system" Pod="csi-node-driver-7c8lr" WorkloadEndpoint="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:23.499866 env[1235]: time="2024-02-09T19:53:23.499749742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:53:23.499866 env[1235]: time="2024-02-09T19:53:23.499798843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:53:23.499866 env[1235]: time="2024-02-09T19:53:23.499810060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:53:23.500047 env[1235]: time="2024-02-09T19:53:23.499972385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30 pid=3738 runtime=io.containerd.runc.v2 Feb 9 19:53:23.508000 audit[3754]: NETFILTER_CFG table=filter:115 family=2 entries=36 op=nft_register_chain pid=3754 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:23.508000 audit[3754]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffe942aeac0 a2=0 a3=7ffe942aeaac items=0 ppid=3461 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:23.508000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:23.514421 systemd-networkd[1111]: cali904e6679f0b: Link UP Feb 9 19:53:23.515775 systemd-networkd[1111]: cali904e6679f0b: Gained carrier Feb 9 19:53:23.516551 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali904e6679f0b: link becomes ready Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.405 [INFO][3688] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0 calico-kube-controllers-f9496d88- calico-system f9468b16-76c1-4bea-9723-d2d33c9557c7 677 0 2024-02-09 19:52:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f9496d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f9496d88-gm4g9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali904e6679f0b [] []}} ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Namespace="calico-system" Pod="calico-kube-controllers-f9496d88-gm4g9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.406 [INFO][3688] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Namespace="calico-system" Pod="calico-kube-controllers-f9496d88-gm4g9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.453 [INFO][3712] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" HandleID="k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.468 [INFO][3712] ipam_plugin.go 268: Auto assigning IP ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" HandleID="k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bede0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"calico-kube-controllers-f9496d88-gm4g9", "timestamp":"2024-02-09 19:53:23.453977424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.468 [INFO][3712] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.469 [INFO][3712] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.469 [INFO][3712] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.470 [INFO][3712] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.488 [INFO][3712] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.494 [INFO][3712] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.495 [INFO][3712] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.499 [INFO][3712] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.500 [INFO][3712] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.502 [INFO][3712] ipam.go 1682: Creating new handle: k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07 Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.506 [INFO][3712] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.510 [INFO][3712] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.510 [INFO][3712] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" host="localhost" Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.510 [INFO][3712] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:53:23.529172 env[1235]: 2024-02-09 19:53:23.510 [INFO][3712] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" HandleID="k8s-pod-network.75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.529696 env[1235]: 2024-02-09 19:53:23.512 [INFO][3688] k8s.go 385: Populated endpoint ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Namespace="calico-system" Pod="calico-kube-controllers-f9496d88-gm4g9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0", GenerateName:"calico-kube-controllers-f9496d88-", Namespace:"calico-system", SelfLink:"", UID:"f9468b16-76c1-4bea-9723-d2d33c9557c7", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f9496d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f9496d88-gm4g9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali904e6679f0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:23.529696 env[1235]: 2024-02-09 19:53:23.512 [INFO][3688] k8s.go 386: Calico CNI using IPs: [192.168.88.130/32] ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Namespace="calico-system" Pod="calico-kube-controllers-f9496d88-gm4g9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.529696 env[1235]: 2024-02-09 19:53:23.512 [INFO][3688] dataplane_linux.go 68: Setting the host side veth name to cali904e6679f0b ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Namespace="calico-system" Pod="calico-kube-controllers-f9496d88-gm4g9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.529696 env[1235]: 2024-02-09 19:53:23.516 [INFO][3688] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Namespace="calico-system" Pod="calico-kube-controllers-f9496d88-gm4g9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.529696 env[1235]: 2024-02-09 19:53:23.520 [INFO][3688] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Namespace="calico-system" 
Pod="calico-kube-controllers-f9496d88-gm4g9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0", GenerateName:"calico-kube-controllers-f9496d88-", Namespace:"calico-system", SelfLink:"", UID:"f9468b16-76c1-4bea-9723-d2d33c9557c7", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f9496d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07", Pod:"calico-kube-controllers-f9496d88-gm4g9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali904e6679f0b", MAC:"62:06:3c:87:50:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:23.529696 env[1235]: 2024-02-09 19:53:23.526 [INFO][3688] k8s.go 491: Wrote updated endpoint to datastore ContainerID="75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07" Namespace="calico-system" Pod="calico-kube-controllers-f9496d88-gm4g9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:23.546879 env[1235]: time="2024-02-09T19:53:23.546826247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:53:23.547837 env[1235]: time="2024-02-09T19:53:23.546872920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:53:23.547837 env[1235]: time="2024-02-09T19:53:23.546882082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:53:23.547837 env[1235]: time="2024-02-09T19:53:23.547102712Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07 pid=3788 runtime=io.containerd.runc.v2 Feb 9 19:53:23.553000 audit[3802]: NETFILTER_CFG table=filter:116 family=2 entries=34 op=nft_register_chain pid=3802 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:23.553000 audit[3802]: SYSCALL arch=c000003e syscall=46 success=yes exit=18320 a0=3 a1=7fffe5d3f480 a2=0 a3=7fffe5d3f46c items=0 ppid=3461 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:23.553000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:23.557322 systemd-resolved[1169]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:53:23.570748 systemd-resolved[1169]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:53:23.571139 env[1235]: time="2024-02-09T19:53:23.571118522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c8lr,Uid:0895769c-c5fb-4668-b9c0-ae965d762f27,Namespace:calico-system,Attempt:1,} returns sandbox id \"20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30\"" Feb 9 19:53:23.575722 env[1235]: time="2024-02-09T19:53:23.575707839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:53:23.594529 env[1235]: time="2024-02-09T19:53:23.592589476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f9496d88-gm4g9,Uid:f9468b16-76c1-4bea-9723-d2d33c9557c7,Namespace:calico-system,Attempt:1,} returns sandbox id \"75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07\"" Feb 9 19:53:23.762562 env[1235]: time="2024-02-09T19:53:23.762538921Z" level=info msg="StopPodSandbox for \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\"" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.787 [INFO][3845] k8s.go 578: Cleaning up netns ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.788 [INFO][3845] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" iface="eth0" netns="/var/run/netns/cni-b044099b-1ec3-4185-0138-5b3dfc2f6fed" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.788 [INFO][3845] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" iface="eth0" netns="/var/run/netns/cni-b044099b-1ec3-4185-0138-5b3dfc2f6fed" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.788 [INFO][3845] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" iface="eth0" netns="/var/run/netns/cni-b044099b-1ec3-4185-0138-5b3dfc2f6fed" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.788 [INFO][3845] k8s.go 585: Releasing IP address(es) ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.788 [INFO][3845] utils.go 188: Calico CNI releasing IP address ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.806 [INFO][3852] ipam_plugin.go 415: Releasing address using handleID ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.806 [INFO][3852] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.806 [INFO][3852] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.811 [WARNING][3852] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.811 [INFO][3852] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.811 [INFO][3852] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:23.813755 env[1235]: 2024-02-09 19:53:23.812 [INFO][3845] k8s.go 591: Teardown processing complete. 
ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:23.818385 env[1235]: time="2024-02-09T19:53:23.814352848Z" level=info msg="TearDown network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\" successfully" Feb 9 19:53:23.818385 env[1235]: time="2024-02-09T19:53:23.814372298Z" level=info msg="StopPodSandbox for \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\" returns successfully" Feb 9 19:53:23.818385 env[1235]: time="2024-02-09T19:53:23.814840953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b9gw4,Uid:1251f47a-f8b9-441f-b4c8-207de9468685,Namespace:kube-system,Attempt:1,}" Feb 9 19:53:23.954757 systemd-networkd[1111]: cali7bab1a2ab66: Link UP Feb 9 19:53:23.957780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7bab1a2ab66: link becomes ready Feb 9 19:53:23.957159 systemd-networkd[1111]: cali7bab1a2ab66: Gained carrier Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.919 [INFO][3858] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--b9gw4-eth0 coredns-787d4945fb- kube-system 1251f47a-f8b9-441f-b4c8-207de9468685 689 0 2024-02-09 19:52:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-b9gw4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7bab1a2ab66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Namespace="kube-system" Pod="coredns-787d4945fb-b9gw4" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b9gw4-" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.919 [INFO][3858] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Namespace="kube-system" Pod="coredns-787d4945fb-b9gw4" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.933 [INFO][3870] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" HandleID="k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.939 [INFO][3870] ipam_plugin.go 268: Auto assigning IP ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" HandleID="k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00023cfa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-b9gw4", "timestamp":"2024-02-09 19:53:23.933954038 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.939 [INFO][3870] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.939 [INFO][3870] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.939 [INFO][3870] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.940 [INFO][3870] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.942 [INFO][3870] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.944 [INFO][3870] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.945 [INFO][3870] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.946 [INFO][3870] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.946 [INFO][3870] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.946 [INFO][3870] ipam.go 1682: Creating new handle: k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.949 [INFO][3870] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.952 [INFO][3870] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.952 [INFO][3870] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" host="localhost" Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.952 [INFO][3870] ipam_plugin.go 377: Released host-wide IPAM lock. 
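The same lock/affinity/claim sequence now runs a third time, handing 192.168.88.131 to coredns-787d4945fb-b9gw4 behind cali7bab1a2ab66. Once the sandbox is running, the allocations can be compared against the live pod IPs, assuming kubectl access to this cluster:
  $ kubectl get pods -A -o wide --field-selector spec.nodeName=localhost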
Feb 9 19:53:23.967299 env[1235]: 2024-02-09 19:53:23.952 [INFO][3870] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" HandleID="k8s-pod-network.adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.967785 env[1235]: 2024-02-09 19:53:23.953 [INFO][3858] k8s.go 385: Populated endpoint ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Namespace="kube-system" Pod="coredns-787d4945fb-b9gw4" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--b9gw4-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"1251f47a-f8b9-441f-b4c8-207de9468685", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-b9gw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bab1a2ab66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:23.967785 env[1235]: 2024-02-09 19:53:23.953 [INFO][3858] k8s.go 386: Calico CNI using IPs: [192.168.88.131/32] ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Namespace="kube-system" Pod="coredns-787d4945fb-b9gw4" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.967785 env[1235]: 2024-02-09 19:53:23.953 [INFO][3858] dataplane_linux.go 68: Setting the host side veth name to cali7bab1a2ab66 ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Namespace="kube-system" Pod="coredns-787d4945fb-b9gw4" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.967785 env[1235]: 2024-02-09 19:53:23.957 [INFO][3858] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Namespace="kube-system" Pod="coredns-787d4945fb-b9gw4" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.967785 env[1235]: 2024-02-09 19:53:23.957 [INFO][3858] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Namespace="kube-system" Pod="coredns-787d4945fb-b9gw4" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--b9gw4-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"1251f47a-f8b9-441f-b4c8-207de9468685", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d", Pod:"coredns-787d4945fb-b9gw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bab1a2ab66", MAC:"f2:7e:c6:f9:cf:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:23.967785 env[1235]: 2024-02-09 19:53:23.965 [INFO][3858] k8s.go 491: Wrote updated endpoint to datastore ContainerID="adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d" Namespace="kube-system" Pod="coredns-787d4945fb-b9gw4" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:23.975190 env[1235]: time="2024-02-09T19:53:23.975077500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:53:23.975190 env[1235]: time="2024-02-09T19:53:23.975107850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:53:23.975190 env[1235]: time="2024-02-09T19:53:23.975115083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:53:23.975468 env[1235]: time="2024-02-09T19:53:23.975423476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d pid=3898 runtime=io.containerd.runc.v2 Feb 9 19:53:23.999758 systemd-resolved[1169]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:53:23.997000 audit[3928]: NETFILTER_CFG table=filter:117 family=2 entries=44 op=nft_register_chain pid=3928 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:24.000956 kernel: kauditd_printk_skb: 114 callbacks suppressed Feb 9 19:53:24.000989 kernel: audit: type=1325 audit(1707508403.997:309): table=filter:117 family=2 entries=44 op=nft_register_chain pid=3928 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:23.997000 audit[3928]: SYSCALL arch=c000003e syscall=46 success=yes exit=22284 a0=3 a1=7ffe725464a0 a2=0 a3=7ffe7254648c items=0 ppid=3461 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:24.006590 kernel: audit: type=1300 audit(1707508403.997:309): arch=c000003e syscall=46 success=yes exit=22284 a0=3 a1=7ffe725464a0 a2=0 a3=7ffe7254648c items=0 ppid=3461 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:23.997000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:24.009342 kernel: audit: type=1327 audit(1707508403.997:309): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:24.023163 env[1235]: time="2024-02-09T19:53:24.023138565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-b9gw4,Uid:1251f47a-f8b9-441f-b4c8-207de9468685,Namespace:kube-system,Attempt:1,} returns sandbox id \"adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d\"" Feb 9 19:53:24.026368 env[1235]: time="2024-02-09T19:53:24.026281546Z" level=info msg="CreateContainer within sandbox \"adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:53:24.032406 env[1235]: time="2024-02-09T19:53:24.032385488Z" level=info msg="CreateContainer within sandbox \"adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5e9601626956f53a82db537265c406f43047222802b2fae5464f5d220ac0e53\"" Feb 9 19:53:24.033430 env[1235]: time="2024-02-09T19:53:24.032691807Z" level=info msg="StartContainer for \"a5e9601626956f53a82db537265c406f43047222802b2fae5464f5d220ac0e53\"" Feb 9 19:53:24.064124 env[1235]: time="2024-02-09T19:53:24.064097209Z" level=info msg="StartContainer for \"a5e9601626956f53a82db537265c406f43047222802b2fae5464f5d220ac0e53\" returns successfully" Feb 9 19:53:24.336133 systemd[1]: run-netns-cni\x2db044099b\x2d1ec3\x2d4185\x2d0138\x2d5b3dfc2f6fed.mount: Deactivated successfully. 
Feb 9 19:53:24.764386 env[1235]: time="2024-02-09T19:53:24.763614980Z" level=info msg="StopPodSandbox for \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\"" Feb 9 19:53:24.781625 systemd-networkd[1111]: cali0c1618ef35c: Gained IPv6LL Feb 9 19:53:24.828150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977471718.mount: Deactivated successfully. Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.799 [INFO][3994] k8s.go 578: Cleaning up netns ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.799 [INFO][3994] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" iface="eth0" netns="/var/run/netns/cni-fba8ccc8-f0ff-a8ee-9d59-9759f79a6f0b" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.800 [INFO][3994] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" iface="eth0" netns="/var/run/netns/cni-fba8ccc8-f0ff-a8ee-9d59-9759f79a6f0b" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.800 [INFO][3994] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" iface="eth0" netns="/var/run/netns/cni-fba8ccc8-f0ff-a8ee-9d59-9759f79a6f0b" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.800 [INFO][3994] k8s.go 585: Releasing IP address(es) ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.800 [INFO][3994] utils.go 188: Calico CNI releasing IP address ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.831 [INFO][4000] ipam_plugin.go 415: Releasing address using handleID ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.831 [INFO][4000] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.831 [INFO][4000] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.840 [WARNING][4000] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.840 [INFO][4000] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.840 [INFO][4000] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:24.842922 env[1235]: 2024-02-09 19:53:24.841 [INFO][3994] k8s.go 591: Teardown processing complete. 
ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:24.845377 env[1235]: time="2024-02-09T19:53:24.845190187Z" level=info msg="TearDown network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\" successfully" Feb 9 19:53:24.845377 env[1235]: time="2024-02-09T19:53:24.845209087Z" level=info msg="StopPodSandbox for \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\" returns successfully" Feb 9 19:53:24.844684 systemd[1]: run-netns-cni\x2dfba8ccc8\x2df0ff\x2da8ee\x2d9d59\x2d9759f79a6f0b.mount: Deactivated successfully. Feb 9 19:53:24.845923 env[1235]: time="2024-02-09T19:53:24.845909532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-9q2kb,Uid:fd916192-50ee-4b61-9539-fadb37ab3765,Namespace:kube-system,Attempt:1,}" Feb 9 19:53:24.925578 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:53:24.925632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali35e8f95d966: link becomes ready Feb 9 19:53:24.922567 systemd-networkd[1111]: cali35e8f95d966: Link UP Feb 9 19:53:24.924594 systemd-networkd[1111]: cali35e8f95d966: Gained carrier Feb 9 19:53:24.928721 kubelet[2290]: I0209 19:53:24.928606 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-b9gw4" podStartSLOduration=33.92855907 pod.CreationTimestamp="2024-02-09 19:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:53:24.916948282 +0000 UTC m=+48.322314073" watchObservedRunningTime="2024-02-09 19:53:24.92855907 +0000 UTC m=+48.333924867" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.875 [INFO][4006] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--9q2kb-eth0 coredns-787d4945fb- kube-system fd916192-50ee-4b61-9539-fadb37ab3765 698 0 2024-02-09 19:52:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-9q2kb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35e8f95d966 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Namespace="kube-system" Pod="coredns-787d4945fb-9q2kb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--9q2kb-" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.875 [INFO][4006] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Namespace="kube-system" Pod="coredns-787d4945fb-9q2kb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.893 [INFO][4017] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" HandleID="k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.899 [INFO][4017] ipam_plugin.go 268: Auto assigning IP ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" HandleID="k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" 
Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c5de0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-9q2kb", "timestamp":"2024-02-09 19:53:24.893279842 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.899 [INFO][4017] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.899 [INFO][4017] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.899 [INFO][4017] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.900 [INFO][4017] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.902 [INFO][4017] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.904 [INFO][4017] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.905 [INFO][4017] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.906 [INFO][4017] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.906 [INFO][4017] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.907 [INFO][4017] ipam.go 1682: Creating new handle: k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3 Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.912 [INFO][4017] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.919 [INFO][4017] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.919 [INFO][4017] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" host="localhost" Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.919 [INFO][4017] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:53:24.942372 env[1235]: 2024-02-09 19:53:24.919 [INFO][4017] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" HandleID="k8s-pod-network.51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.943020 env[1235]: 2024-02-09 19:53:24.921 [INFO][4006] k8s.go 385: Populated endpoint ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Namespace="kube-system" Pod="coredns-787d4945fb-9q2kb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--9q2kb-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"fd916192-50ee-4b61-9539-fadb37ab3765", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-9q2kb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35e8f95d966", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:24.943020 env[1235]: 2024-02-09 19:53:24.921 [INFO][4006] k8s.go 386: Calico CNI using IPs: [192.168.88.132/32] ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Namespace="kube-system" Pod="coredns-787d4945fb-9q2kb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.943020 env[1235]: 2024-02-09 19:53:24.921 [INFO][4006] dataplane_linux.go 68: Setting the host side veth name to cali35e8f95d966 ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Namespace="kube-system" Pod="coredns-787d4945fb-9q2kb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.943020 env[1235]: 2024-02-09 19:53:24.924 [INFO][4006] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Namespace="kube-system" Pod="coredns-787d4945fb-9q2kb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.943020 env[1235]: 2024-02-09 19:53:24.926 [INFO][4006] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Namespace="kube-system" Pod="coredns-787d4945fb-9q2kb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--9q2kb-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"fd916192-50ee-4b61-9539-fadb37ab3765", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3", Pod:"coredns-787d4945fb-9q2kb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35e8f95d966", MAC:"26:b2:24:1d:70:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:24.943020 env[1235]: 2024-02-09 19:53:24.940 [INFO][4006] k8s.go 491: Wrote updated endpoint to datastore ContainerID="51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3" Namespace="kube-system" Pod="coredns-787d4945fb-9q2kb" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:24.956330 env[1235]: time="2024-02-09T19:53:24.956294533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:53:24.956450 env[1235]: time="2024-02-09T19:53:24.956436398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:53:24.956523 env[1235]: time="2024-02-09T19:53:24.956510615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:53:24.957073 env[1235]: time="2024-02-09T19:53:24.956668485Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3 pid=4046 runtime=io.containerd.runc.v2 Feb 9 19:53:24.974596 kernel: audit: type=1325 audit(1707508404.965:310): table=filter:118 family=2 entries=38 op=nft_register_chain pid=4063 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:24.974696 kernel: audit: type=1300 audit(1707508404.965:310): arch=c000003e syscall=46 success=yes exit=19088 a0=3 a1=7ffd85e579f0 a2=0 a3=7ffd85e579dc items=0 ppid=3461 pid=4063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:24.974713 kernel: audit: type=1327 audit(1707508404.965:310): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:24.965000 audit[4063]: NETFILTER_CFG table=filter:118 family=2 entries=38 op=nft_register_chain pid=4063 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:24.965000 audit[4063]: SYSCALL arch=c000003e syscall=46 success=yes exit=19088 a0=3 a1=7ffd85e579f0 a2=0 a3=7ffd85e579dc items=0 ppid=3461 pid=4063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:24.965000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:24.982623 systemd-resolved[1169]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:53:25.037266 env[1235]: time="2024-02-09T19:53:25.037208348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-9q2kb,Uid:fd916192-50ee-4b61-9539-fadb37ab3765,Namespace:kube-system,Attempt:1,} returns sandbox id \"51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3\"" Feb 9 19:53:25.044584 env[1235]: time="2024-02-09T19:53:25.044566477Z" level=info msg="CreateContainer within sandbox \"51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:53:25.207000 audit[4107]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:25.216872 kernel: audit: type=1325 audit(1707508405.207:311): table=filter:119 family=2 entries=9 op=nft_register_rule pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:25.216916 kernel: audit: type=1300 audit(1707508405.207:311): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe5f3eb630 a2=0 a3=7ffe5f3eb61c items=0 ppid=2445 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:25.207000 audit[4107]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe5f3eb630 a2=0 a3=7ffe5f3eb61c items=0 ppid=2445 pid=4107 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:25.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:25.219487 kernel: audit: type=1327 audit(1707508405.207:311): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:25.225000 audit[4107]: NETFILTER_CFG table=nat:120 family=2 entries=51 op=nft_register_chain pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:25.225000 audit[4107]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffe5f3eb630 a2=0 a3=7ffe5f3eb61c items=0 ppid=2445 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:25.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:25.230582 kernel: audit: type=1325 audit(1707508405.225:312): table=nat:120 family=2 entries=51 op=nft_register_chain pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:25.240910 env[1235]: time="2024-02-09T19:53:25.240879031Z" level=info msg="CreateContainer within sandbox \"51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"257844ce36a59433755f0e251a7890405a2d9548b7c8d878876536f480a1ceb0\"" Feb 9 19:53:25.244148 env[1235]: time="2024-02-09T19:53:25.242862045Z" level=info msg="StartContainer for \"257844ce36a59433755f0e251a7890405a2d9548b7c8d878876536f480a1ceb0\"" Feb 9 19:53:25.304000 audit[4168]: NETFILTER_CFG table=filter:121 family=2 entries=6 op=nft_register_rule pid=4168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:25.304000 audit[4168]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff1131b9a0 a2=0 a3=7fff1131b98c items=0 ppid=2445 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:25.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:25.305000 audit[4168]: NETFILTER_CFG table=nat:122 family=2 entries=60 op=nft_register_rule pid=4168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:25.305000 audit[4168]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7fff1131b9a0 a2=0 a3=7fff1131b98c items=0 ppid=2445 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:25.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:25.319952 env[1235]: time="2024-02-09T19:53:25.319927031Z" level=info msg="StartContainer for \"257844ce36a59433755f0e251a7890405a2d9548b7c8d878876536f480a1ceb0\" returns successfully" Feb 9 19:53:25.357704 systemd-networkd[1111]: cali904e6679f0b: Gained IPv6LL Feb 9 
19:53:25.382410 env[1235]: time="2024-02-09T19:53:25.382374984Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:25.382848 env[1235]: time="2024-02-09T19:53:25.382833241Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:25.383665 env[1235]: time="2024-02-09T19:53:25.383652363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:25.384405 env[1235]: time="2024-02-09T19:53:25.384393225Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:25.384808 env[1235]: time="2024-02-09T19:53:25.384794737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 19:53:25.386098 env[1235]: time="2024-02-09T19:53:25.385410380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 19:53:25.386800 env[1235]: time="2024-02-09T19:53:25.386782884Z" level=info msg="CreateContainer within sandbox \"20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:53:25.394643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209420179.mount: Deactivated successfully. 
Feb 9 19:53:25.396449 env[1235]: time="2024-02-09T19:53:25.396426789Z" level=info msg="CreateContainer within sandbox \"20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3386453da1372abd8790f0f8b53716c9e161f7a3550cadbb73f87bd6f5e9dc49\"" Feb 9 19:53:25.406418 env[1235]: time="2024-02-09T19:53:25.406400428Z" level=info msg="StartContainer for \"3386453da1372abd8790f0f8b53716c9e161f7a3550cadbb73f87bd6f5e9dc49\"" Feb 9 19:53:25.456447 env[1235]: time="2024-02-09T19:53:25.456421013Z" level=info msg="StartContainer for \"3386453da1372abd8790f0f8b53716c9e161f7a3550cadbb73f87bd6f5e9dc49\" returns successfully" Feb 9 19:53:25.921721 kubelet[2290]: I0209 19:53:25.920705 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-9q2kb" podStartSLOduration=34.920682763 pod.CreationTimestamp="2024-02-09 19:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:53:25.920416779 +0000 UTC m=+49.325782576" watchObservedRunningTime="2024-02-09 19:53:25.920682763 +0000 UTC m=+49.326048550" Feb 9 19:53:25.997660 systemd-networkd[1111]: cali7bab1a2ab66: Gained IPv6LL Feb 9 19:53:26.009000 audit[4232]: NETFILTER_CFG table=filter:123 family=2 entries=6 op=nft_register_rule pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:26.009000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffea03d16b0 a2=0 a3=7ffea03d169c items=0 ppid=2445 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:26.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:26.054000 audit[4232]: NETFILTER_CFG table=nat:124 family=2 entries=72 op=nft_register_chain pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:26.054000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffea03d16b0 a2=0 a3=7ffea03d169c items=0 ppid=2445 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:26.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:26.333983 systemd[1]: run-containerd-runc-k8s.io-3386453da1372abd8790f0f8b53716c9e161f7a3550cadbb73f87bd6f5e9dc49-runc.9EoJ47.mount: Deactivated successfully. Feb 9 19:53:26.381740 systemd-networkd[1111]: cali35e8f95d966: Gained IPv6LL Feb 9 19:53:26.776844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410573317.mount: Deactivated successfully. 
Feb 9 19:53:28.024294 env[1235]: time="2024-02-09T19:53:28.024263038Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:28.025634 env[1235]: time="2024-02-09T19:53:28.025622418Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:28.026910 env[1235]: time="2024-02-09T19:53:28.026898402Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:28.028121 env[1235]: time="2024-02-09T19:53:28.028109537Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:28.028671 env[1235]: time="2024-02-09T19:53:28.028652449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 9 19:53:28.029642 env[1235]: time="2024-02-09T19:53:28.029623944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:53:28.043263 env[1235]: time="2024-02-09T19:53:28.043241963Z" level=info msg="CreateContainer within sandbox \"75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 19:53:28.048820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2298562508.mount: Deactivated successfully. 
Feb 9 19:53:28.050728 env[1235]: time="2024-02-09T19:53:28.050712032Z" level=info msg="CreateContainer within sandbox \"75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bb1ad0e714122f3de805f3d0eda0ed20c91dda15c859bb2c4c9eccfea3788e94\"" Feb 9 19:53:28.051603 env[1235]: time="2024-02-09T19:53:28.051578064Z" level=info msg="StartContainer for \"bb1ad0e714122f3de805f3d0eda0ed20c91dda15c859bb2c4c9eccfea3788e94\"" Feb 9 19:53:28.101017 env[1235]: time="2024-02-09T19:53:28.100991715Z" level=info msg="StartContainer for \"bb1ad0e714122f3de805f3d0eda0ed20c91dda15c859bb2c4c9eccfea3788e94\" returns successfully" Feb 9 19:53:28.933308 kubelet[2290]: I0209 19:53:28.933289 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f9496d88-gm4g9" podStartSLOduration=-9.223372005921515e+09 pod.CreationTimestamp="2024-02-09 19:52:58 +0000 UTC" firstStartedPulling="2024-02-09 19:53:23.595174961 +0000 UTC m=+47.000540747" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:53:28.932584786 +0000 UTC m=+52.337950584" watchObservedRunningTime="2024-02-09 19:53:28.933261099 +0000 UTC m=+52.338626895" Feb 9 19:53:29.553496 env[1235]: time="2024-02-09T19:53:29.553465940Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:29.554725 env[1235]: time="2024-02-09T19:53:29.554711998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:29.555750 env[1235]: time="2024-02-09T19:53:29.555738939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:29.556637 env[1235]: time="2024-02-09T19:53:29.556625346Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:29.556974 env[1235]: time="2024-02-09T19:53:29.556959276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 19:53:29.559004 env[1235]: time="2024-02-09T19:53:29.558988614Z" level=info msg="CreateContainer within sandbox \"20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:53:29.567614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885607007.mount: Deactivated successfully. 
Feb 9 19:53:29.571909 env[1235]: time="2024-02-09T19:53:29.571888184Z" level=info msg="CreateContainer within sandbox \"20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"88db639f22c3cace434dcba41a7174ce9dacc06ff9f1ef4ac5fc77603ebd9237\"" Feb 9 19:53:29.573360 env[1235]: time="2024-02-09T19:53:29.573342346Z" level=info msg="StartContainer for \"88db639f22c3cace434dcba41a7174ce9dacc06ff9f1ef4ac5fc77603ebd9237\"" Feb 9 19:53:29.607168 env[1235]: time="2024-02-09T19:53:29.607139723Z" level=info msg="StartContainer for \"88db639f22c3cace434dcba41a7174ce9dacc06ff9f1ef4ac5fc77603ebd9237\" returns successfully" Feb 9 19:53:29.812967 kubelet[2290]: I0209 19:53:29.812912 2290 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:53:29.813513 kubelet[2290]: I0209 19:53:29.813505 2290 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:53:29.934100 kubelet[2290]: I0209 19:53:29.934074 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7c8lr" podStartSLOduration=-9.223372004920725e+09 pod.CreationTimestamp="2024-02-09 19:52:58 +0000 UTC" firstStartedPulling="2024-02-09 19:53:23.575336237 +0000 UTC m=+46.980702021" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:53:29.93255997 +0000 UTC m=+53.337925760" watchObservedRunningTime="2024-02-09 19:53:29.93405088 +0000 UTC m=+53.339416676" Feb 9 19:53:36.702905 env[1235]: time="2024-02-09T19:53:36.702640042Z" level=info msg="StopPodSandbox for \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\"" Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.053 [WARNING][4354] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--9q2kb-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"fd916192-50ee-4b61-9539-fadb37ab3765", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3", Pod:"coredns-787d4945fb-9q2kb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35e8f95d966", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.053 [INFO][4354] k8s.go 578: Cleaning up netns ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.054 [INFO][4354] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" iface="eth0" netns="" Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.054 [INFO][4354] k8s.go 585: Releasing IP address(es) ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.054 [INFO][4354] utils.go 188: Calico CNI releasing IP address ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.071 [INFO][4362] ipam_plugin.go 415: Releasing address using handleID ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.071 [INFO][4362] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.071 [INFO][4362] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.075 [WARNING][4362] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.075 [INFO][4362] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.076 [INFO][4362] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:37.078744 env[1235]: 2024-02-09 19:53:37.077 [INFO][4354] k8s.go 591: Teardown processing complete. ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:37.080413 env[1235]: time="2024-02-09T19:53:37.078770708Z" level=info msg="TearDown network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\" successfully" Feb 9 19:53:37.080413 env[1235]: time="2024-02-09T19:53:37.078791974Z" level=info msg="StopPodSandbox for \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\" returns successfully" Feb 9 19:53:37.080413 env[1235]: time="2024-02-09T19:53:37.080196017Z" level=info msg="RemovePodSandbox for \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\"" Feb 9 19:53:37.080413 env[1235]: time="2024-02-09T19:53:37.080215231Z" level=info msg="Forcibly stopping sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\"" Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.112 [WARNING][4380] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--9q2kb-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"fd916192-50ee-4b61-9539-fadb37ab3765", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51c350f1956129f79414a34205de8787a9d4033b187c3d74f0e8c66b141cc3c3", Pod:"coredns-787d4945fb-9q2kb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35e8f95d966", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.112 [INFO][4380] k8s.go 578: Cleaning up netns ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.112 [INFO][4380] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" iface="eth0" netns="" Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.112 [INFO][4380] k8s.go 585: Releasing IP address(es) ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.113 [INFO][4380] utils.go 188: Calico CNI releasing IP address ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.136 [INFO][4387] ipam_plugin.go 415: Releasing address using handleID ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.136 [INFO][4387] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.136 [INFO][4387] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.140 [WARNING][4387] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.140 [INFO][4387] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" HandleID="k8s-pod-network.1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Workload="localhost-k8s-coredns--787d4945fb--9q2kb-eth0" Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.141 [INFO][4387] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:37.143666 env[1235]: 2024-02-09 19:53:37.142 [INFO][4380] k8s.go 591: Teardown processing complete. ContainerID="1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b" Feb 9 19:53:37.155183 env[1235]: time="2024-02-09T19:53:37.143682323Z" level=info msg="TearDown network for sandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\" successfully" Feb 9 19:53:37.155183 env[1235]: time="2024-02-09T19:53:37.155122127Z" level=info msg="RemovePodSandbox \"1ef6ab578b158ffbf8386d1817aa503447ea1a9ec76ca3919ee03f40452b959b\" returns successfully" Feb 9 19:53:37.161396 env[1235]: time="2024-02-09T19:53:37.155440390Z" level=info msg="StopPodSandbox for \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\"" Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.186 [WARNING][4406] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7c8lr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0895769c-c5fb-4668-b9c0-ae965d762f27", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30", Pod:"csi-node-driver-7c8lr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0c1618ef35c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.186 [INFO][4406] k8s.go 578: Cleaning up netns ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.186 [INFO][4406] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" iface="eth0" netns="" Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.186 [INFO][4406] k8s.go 585: Releasing IP address(es) ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.186 [INFO][4406] utils.go 188: Calico CNI releasing IP address ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.197 [INFO][4412] ipam_plugin.go 415: Releasing address using handleID ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.198 [INFO][4412] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.198 [INFO][4412] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.202 [WARNING][4412] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.202 [INFO][4412] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.202 [INFO][4412] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:37.206564 env[1235]: 2024-02-09 19:53:37.203 [INFO][4406] k8s.go 591: Teardown processing complete. ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:37.206564 env[1235]: time="2024-02-09T19:53:37.206048339Z" level=info msg="TearDown network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\" successfully" Feb 9 19:53:37.206564 env[1235]: time="2024-02-09T19:53:37.206068639Z" level=info msg="StopPodSandbox for \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\" returns successfully" Feb 9 19:53:37.214440 env[1235]: time="2024-02-09T19:53:37.206961467Z" level=info msg="RemovePodSandbox for \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\"" Feb 9 19:53:37.214440 env[1235]: time="2024-02-09T19:53:37.206979454Z" level=info msg="Forcibly stopping sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\"" Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.228 [WARNING][4431] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7c8lr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0895769c-c5fb-4668-b9c0-ae965d762f27", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20bb70f0a2160a5d1b2100f742c7de7eb1b5727253ec29c3531bec9a574e4d30", Pod:"csi-node-driver-7c8lr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0c1618ef35c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.228 [INFO][4431] k8s.go 578: Cleaning up netns ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.228 [INFO][4431] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" iface="eth0" netns="" Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.228 [INFO][4431] k8s.go 585: Releasing IP address(es) ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.228 [INFO][4431] utils.go 188: Calico CNI releasing IP address ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.238 [INFO][4437] ipam_plugin.go 415: Releasing address using handleID ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.238 [INFO][4437] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.239 [INFO][4437] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.242 [WARNING][4437] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.242 [INFO][4437] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" HandleID="k8s-pod-network.ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Workload="localhost-k8s-csi--node--driver--7c8lr-eth0" Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.243 [INFO][4437] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:37.245547 env[1235]: 2024-02-09 19:53:37.244 [INFO][4431] k8s.go 591: Teardown processing complete. ContainerID="ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60" Feb 9 19:53:37.245894 env[1235]: time="2024-02-09T19:53:37.245570516Z" level=info msg="TearDown network for sandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\" successfully" Feb 9 19:53:37.262502 env[1235]: time="2024-02-09T19:53:37.262478055Z" level=info msg="RemovePodSandbox \"ba52d338d369ef9d8866575e8de36f1d9975c977bfb1feab3fc328e2c8a15a60\" returns successfully" Feb 9 19:53:37.268873 env[1235]: time="2024-02-09T19:53:37.262827670Z" level=info msg="StopPodSandbox for \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\"" Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.291 [WARNING][4456] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--b9gw4-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"1251f47a-f8b9-441f-b4c8-207de9468685", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d", Pod:"coredns-787d4945fb-b9gw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bab1a2ab66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:37.315170 env[1235]: 2024-02-09 
19:53:37.291 [INFO][4456] k8s.go 578: Cleaning up netns ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.291 [INFO][4456] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" iface="eth0" netns="" Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.291 [INFO][4456] k8s.go 585: Releasing IP address(es) ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.291 [INFO][4456] utils.go 188: Calico CNI releasing IP address ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.306 [INFO][4462] ipam_plugin.go 415: Releasing address using handleID ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.306 [INFO][4462] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.306 [INFO][4462] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.312 [WARNING][4462] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.312 [INFO][4462] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.312 [INFO][4462] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:37.315170 env[1235]: 2024-02-09 19:53:37.314 [INFO][4456] k8s.go 591: Teardown processing complete. ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:37.316702 env[1235]: time="2024-02-09T19:53:37.315189651Z" level=info msg="TearDown network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\" successfully" Feb 9 19:53:37.316702 env[1235]: time="2024-02-09T19:53:37.315217118Z" level=info msg="StopPodSandbox for \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\" returns successfully" Feb 9 19:53:37.316702 env[1235]: time="2024-02-09T19:53:37.315573479Z" level=info msg="RemovePodSandbox for \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\"" Feb 9 19:53:37.316702 env[1235]: time="2024-02-09T19:53:37.315596249Z" level=info msg="Forcibly stopping sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\"" Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.343 [WARNING][4480] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--b9gw4-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"1251f47a-f8b9-441f-b4c8-207de9468685", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"adf0c2fc9b2e150e93769cddacbdd354613c1a9f8b2eceab8aeeb173f5d9944d", Pod:"coredns-787d4945fb-b9gw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bab1a2ab66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.344 [INFO][4480] k8s.go 578: Cleaning up netns ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.344 [INFO][4480] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" iface="eth0" netns="" Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.344 [INFO][4480] k8s.go 585: Releasing IP address(es) ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.344 [INFO][4480] utils.go 188: Calico CNI releasing IP address ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.358 [INFO][4487] ipam_plugin.go 415: Releasing address using handleID ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.358 [INFO][4487] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.358 [INFO][4487] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.362 [WARNING][4487] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.362 [INFO][4487] ipam_plugin.go 443: Releasing address using workloadID ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" HandleID="k8s-pod-network.5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Workload="localhost-k8s-coredns--787d4945fb--b9gw4-eth0" Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.363 [INFO][4487] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:37.367559 env[1235]: 2024-02-09 19:53:37.364 [INFO][4480] k8s.go 591: Teardown processing complete. ContainerID="5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7" Feb 9 19:53:37.367559 env[1235]: time="2024-02-09T19:53:37.366053025Z" level=info msg="TearDown network for sandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\" successfully" Feb 9 19:53:37.372380 env[1235]: time="2024-02-09T19:53:37.372347159Z" level=info msg="RemovePodSandbox \"5f14aa1893d3ed31042b5e13dfb68258f0965f9936fd066f7f7bae2156bd40b7\" returns successfully" Feb 9 19:53:37.372740 env[1235]: time="2024-02-09T19:53:37.372691004Z" level=info msg="StopPodSandbox for \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\"" Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.392 [WARNING][4505] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0", GenerateName:"calico-kube-controllers-f9496d88-", Namespace:"calico-system", SelfLink:"", UID:"f9468b16-76c1-4bea-9723-d2d33c9557c7", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f9496d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07", Pod:"calico-kube-controllers-f9496d88-gm4g9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali904e6679f0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.393 [INFO][4505] k8s.go 578: Cleaning up netns ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.393 [INFO][4505] dataplane_linux.go 526: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" iface="eth0" netns="" Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.393 [INFO][4505] k8s.go 585: Releasing IP address(es) ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.393 [INFO][4505] utils.go 188: Calico CNI releasing IP address ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.404 [INFO][4511] ipam_plugin.go 415: Releasing address using handleID ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.404 [INFO][4511] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.404 [INFO][4511] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.408 [WARNING][4511] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.408 [INFO][4511] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.409 [INFO][4511] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:37.410855 env[1235]: 2024-02-09 19:53:37.409 [INFO][4505] k8s.go 591: Teardown processing complete. ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:37.422091 env[1235]: time="2024-02-09T19:53:37.410890683Z" level=info msg="TearDown network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\" successfully" Feb 9 19:53:37.422091 env[1235]: time="2024-02-09T19:53:37.410911764Z" level=info msg="StopPodSandbox for \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\" returns successfully" Feb 9 19:53:37.422091 env[1235]: time="2024-02-09T19:53:37.411412387Z" level=info msg="RemovePodSandbox for \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\"" Feb 9 19:53:37.422091 env[1235]: time="2024-02-09T19:53:37.411429193Z" level=info msg="Forcibly stopping sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\"" Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.438 [WARNING][4529] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0", GenerateName:"calico-kube-controllers-f9496d88-", Namespace:"calico-system", SelfLink:"", UID:"f9468b16-76c1-4bea-9723-d2d33c9557c7", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f9496d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75fdc65b36b5bc27f80b4e36d80fc7ac2f7dd85441624c1221802d96bd9c5b07", Pod:"calico-kube-controllers-f9496d88-gm4g9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali904e6679f0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.439 [INFO][4529] k8s.go 578: Cleaning up netns ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.439 [INFO][4529] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" iface="eth0" netns="" Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.439 [INFO][4529] k8s.go 585: Releasing IP address(es) ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.439 [INFO][4529] utils.go 188: Calico CNI releasing IP address ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.450 [INFO][4535] ipam_plugin.go 415: Releasing address using handleID ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.450 [INFO][4535] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.450 [INFO][4535] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.454 [WARNING][4535] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.454 [INFO][4535] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" HandleID="k8s-pod-network.0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Workload="localhost-k8s-calico--kube--controllers--f9496d88--gm4g9-eth0" Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.454 [INFO][4535] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:37.456867 env[1235]: 2024-02-09 19:53:37.455 [INFO][4529] k8s.go 591: Teardown processing complete. ContainerID="0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083" Feb 9 19:53:37.464887 env[1235]: time="2024-02-09T19:53:37.457281862Z" level=info msg="TearDown network for sandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\" successfully" Feb 9 19:53:37.471793 env[1235]: time="2024-02-09T19:53:37.471773058Z" level=info msg="RemovePodSandbox \"0fcf25b1a9892c38a889eaffc72aef411125cddc2c3d109f59a801d5eb1d7083\" returns successfully" Feb 9 19:53:41.322360 systemd[1]: run-containerd-runc-k8s.io-bb1ad0e714122f3de805f3d0eda0ed20c91dda15c859bb2c4c9eccfea3788e94-runc.xjt7E4.mount: Deactivated successfully. Feb 9 19:53:41.748521 systemd[1]: run-containerd-runc-k8s.io-874e6eab0a4f4e8cc94cf2a43b8913d2a2d4935b47737d92040b6726e17d2296-runc.qZ6r4l.mount: Deactivated successfully. Feb 9 19:53:42.776885 systemd[1]: run-containerd-runc-k8s.io-bb1ad0e714122f3de805f3d0eda0ed20c91dda15c859bb2c4c9eccfea3788e94-runc.DXpaDz.mount: Deactivated successfully. 
Feb 9 19:53:43.686000 audit[4636]: NETFILTER_CFG table=filter:125 family=2 entries=7 op=nft_register_rule pid=4636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:43.694975 kernel: kauditd_printk_skb: 14 callbacks suppressed Feb 9 19:53:43.695026 kernel: audit: type=1325 audit(1707508423.686:317): table=filter:125 family=2 entries=7 op=nft_register_rule pid=4636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:43.700575 kernel: audit: type=1300 audit(1707508423.686:317): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffcdcd7a020 a2=0 a3=7ffcdcd7a00c items=0 ppid=2445 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:43.700613 kernel: audit: type=1327 audit(1707508423.686:317): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:43.686000 audit[4636]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffcdcd7a020 a2=0 a3=7ffcdcd7a00c items=0 ppid=2445 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:43.716846 kernel: audit: type=1325 audit(1707508423.698:318): table=nat:126 family=2 entries=78 op=nft_register_rule pid=4636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:43.716898 kernel: audit: type=1300 audit(1707508423.698:318): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcdcd7a020 a2=0 a3=7ffcdcd7a00c items=0 ppid=2445 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:43.716915 kernel: audit: type=1327 audit(1707508423.698:318): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:43.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:43.698000 audit[4636]: NETFILTER_CFG table=nat:126 family=2 entries=78 op=nft_register_rule pid=4636 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:43.698000 audit[4636]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcdcd7a020 a2=0 a3=7ffcdcd7a00c items=0 ppid=2445 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:43.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:43.737000 audit[4662]: NETFILTER_CFG table=filter:127 family=2 entries=8 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:43.737000 audit[4662]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe499a6ac0 a2=0 a3=7ffe499a6aac items=0 ppid=2445 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:53:43.743864 kernel: audit: type=1325 audit(1707508423.737:319): table=filter:127 family=2 entries=8 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:43.743910 kernel: audit: type=1300 audit(1707508423.737:319): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe499a6ac0 a2=0 a3=7ffe499a6aac items=0 ppid=2445 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:43.743927 kernel: audit: type=1327 audit(1707508423.737:319): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:43.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:43.737000 audit[4662]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:43.737000 audit[4662]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe499a6ac0 a2=0 a3=7ffe499a6aac items=0 ppid=2445 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:43.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:43.751558 kernel: audit: type=1325 audit(1707508423.737:320): table=nat:128 family=2 entries=78 op=nft_register_rule pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:43.869969 kubelet[2290]: I0209 19:53:43.869941 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:53:43.882306 kubelet[2290]: I0209 19:53:43.882283 2290 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:53:44.005418 kubelet[2290]: I0209 19:53:44.005391 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcbgq\" (UniqueName: \"kubernetes.io/projected/f2ba577c-9618-4373-b6ae-f71e3310c3b1-kube-api-access-gcbgq\") pod \"calico-apiserver-b69977687-gxsb8\" (UID: \"f2ba577c-9618-4373-b6ae-f71e3310c3b1\") " pod="calico-apiserver/calico-apiserver-b69977687-gxsb8" Feb 9 19:53:44.005688 kubelet[2290]: I0209 19:53:44.005676 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00540ab5-fdde-4403-bb4c-ca714f7bff33-calico-apiserver-certs\") pod \"calico-apiserver-b69977687-pbx7x\" (UID: \"00540ab5-fdde-4403-bb4c-ca714f7bff33\") " pod="calico-apiserver/calico-apiserver-b69977687-pbx7x" Feb 9 19:53:44.005784 kubelet[2290]: I0209 19:53:44.005774 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbrf\" (UniqueName: \"kubernetes.io/projected/00540ab5-fdde-4403-bb4c-ca714f7bff33-kube-api-access-vfbrf\") pod \"calico-apiserver-b69977687-pbx7x\" (UID: \"00540ab5-fdde-4403-bb4c-ca714f7bff33\") " pod="calico-apiserver/calico-apiserver-b69977687-pbx7x" Feb 9 19:53:44.005862 kubelet[2290]: I0209 19:53:44.005853 2290 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/f2ba577c-9618-4373-b6ae-f71e3310c3b1-calico-apiserver-certs\") pod \"calico-apiserver-b69977687-gxsb8\" (UID: \"f2ba577c-9618-4373-b6ae-f71e3310c3b1\") " pod="calico-apiserver/calico-apiserver-b69977687-gxsb8" Feb 9 19:53:44.506549 env[1235]: time="2024-02-09T19:53:44.506475278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b69977687-gxsb8,Uid:f2ba577c-9618-4373-b6ae-f71e3310c3b1,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:53:44.507183 env[1235]: time="2024-02-09T19:53:44.507081401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b69977687-pbx7x,Uid:00540ab5-fdde-4403-bb4c-ca714f7bff33,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:53:44.675140 systemd-networkd[1111]: califc429fa19e0: Link UP Feb 9 19:53:44.714513 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:53:44.714604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califc429fa19e0: link becomes ready Feb 9 19:53:44.676939 systemd-networkd[1111]: califc429fa19e0: Gained carrier Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.580 [INFO][4676] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0 calico-apiserver-b69977687- calico-apiserver 00540ab5-fdde-4403-bb4c-ca714f7bff33 834 0 2024-02-09 19:53:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b69977687 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b69977687-pbx7x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc429fa19e0 [] []}} ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-pbx7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--pbx7x-" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.580 [INFO][4676] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-pbx7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.615 [INFO][4694] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" HandleID="k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Workload="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.624 [INFO][4694] ipam_plugin.go 268: Auto assigning IP ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" HandleID="k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Workload="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f07e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b69977687-pbx7x", "timestamp":"2024-02-09 19:53:44.615743355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.624 [INFO][4694] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.624 [INFO][4694] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.624 [INFO][4694] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.624 [INFO][4694] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.638 [INFO][4694] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.639 [INFO][4694] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.640 [INFO][4694] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.641 [INFO][4694] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.641 [INFO][4694] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.642 [INFO][4694] ipam.go 1682: Creating new handle: k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5 Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.644 [INFO][4694] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.648 [INFO][4694] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.648 [INFO][4694] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" host="localhost" Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.648 [INFO][4694] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:53:44.722234 env[1235]: 2024-02-09 19:53:44.648 [INFO][4694] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" HandleID="k8s-pod-network.79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Workload="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" Feb 9 19:53:44.731748 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali49b757be0c4: link becomes ready Feb 9 19:53:44.731794 env[1235]: 2024-02-09 19:53:44.650 [INFO][4676] k8s.go 385: Populated endpoint ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-pbx7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0", GenerateName:"calico-apiserver-b69977687-", Namespace:"calico-apiserver", SelfLink:"", UID:"00540ab5-fdde-4403-bb4c-ca714f7bff33", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 53, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b69977687", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b69977687-pbx7x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc429fa19e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:44.731794 env[1235]: 2024-02-09 19:53:44.650 [INFO][4676] k8s.go 386: Calico CNI using IPs: [192.168.88.133/32] ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-pbx7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" Feb 9 19:53:44.731794 env[1235]: 2024-02-09 19:53:44.650 [INFO][4676] dataplane_linux.go 68: Setting the host side veth name to califc429fa19e0 ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-pbx7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" Feb 9 19:53:44.731794 env[1235]: 2024-02-09 19:53:44.677 [INFO][4676] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-pbx7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" Feb 9 19:53:44.731794 env[1235]: 2024-02-09 19:53:44.677 [INFO][4676] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" 
Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-pbx7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0", GenerateName:"calico-apiserver-b69977687-", Namespace:"calico-apiserver", SelfLink:"", UID:"00540ab5-fdde-4403-bb4c-ca714f7bff33", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 53, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b69977687", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5", Pod:"calico-apiserver-b69977687-pbx7x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc429fa19e0", MAC:"e6:ed:1d:73:68:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:44.731794 env[1235]: 2024-02-09 19:53:44.715 [INFO][4676] k8s.go 491: Wrote updated endpoint to datastore ContainerID="79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-pbx7x" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--pbx7x-eth0" Feb 9 19:53:44.727409 systemd-networkd[1111]: cali49b757be0c4: Link UP Feb 9 19:53:44.728971 systemd-networkd[1111]: cali49b757be0c4: Gained carrier Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.581 [INFO][4669] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0 calico-apiserver-b69977687- calico-apiserver f2ba577c-9618-4373-b6ae-f71e3310c3b1 833 0 2024-02-09 19:53:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b69977687 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b69977687-gxsb8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali49b757be0c4 [] []}} ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-gxsb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--gxsb8-" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.581 [INFO][4669] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-gxsb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.641 [INFO][4699] 
ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" HandleID="k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Workload="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.665 [INFO][4699] ipam_plugin.go 268: Auto assigning IP ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" HandleID="k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Workload="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002befb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b69977687-gxsb8", "timestamp":"2024-02-09 19:53:44.641359302 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.665 [INFO][4699] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.665 [INFO][4699] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.665 [INFO][4699] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.666 [INFO][4699] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" host="localhost" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.677 [INFO][4699] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.682 [INFO][4699] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.684 [INFO][4699] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.714 [INFO][4699] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.714 [INFO][4699] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" host="localhost" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.715 [INFO][4699] ipam.go 1682: Creating new handle: k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.717 [INFO][4699] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" host="localhost" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.723 [INFO][4699] ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" host="localhost" Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.723 [INFO][4699] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" host="localhost" Feb 9 
19:53:44.779226 env[1235]: 2024-02-09 19:53:44.724 [INFO][4699] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:53:44.779226 env[1235]: 2024-02-09 19:53:44.724 [INFO][4699] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" HandleID="k8s-pod-network.7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Workload="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" Feb 9 19:53:44.781384 env[1235]: 2024-02-09 19:53:44.725 [INFO][4669] k8s.go 385: Populated endpoint ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-gxsb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0", GenerateName:"calico-apiserver-b69977687-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2ba577c-9618-4373-b6ae-f71e3310c3b1", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 53, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b69977687", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b69977687-gxsb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49b757be0c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:44.781384 env[1235]: 2024-02-09 19:53:44.725 [INFO][4669] k8s.go 386: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-gxsb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" Feb 9 19:53:44.781384 env[1235]: 2024-02-09 19:53:44.725 [INFO][4669] dataplane_linux.go 68: Setting the host side veth name to cali49b757be0c4 ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-gxsb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" Feb 9 19:53:44.781384 env[1235]: 2024-02-09 19:53:44.728 [INFO][4669] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-gxsb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" Feb 9 19:53:44.781384 env[1235]: 2024-02-09 19:53:44.729 [INFO][4669] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-gxsb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0", GenerateName:"calico-apiserver-b69977687-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2ba577c-9618-4373-b6ae-f71e3310c3b1", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 53, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b69977687", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c", Pod:"calico-apiserver-b69977687-gxsb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49b757be0c4", MAC:"6e:0c:7d:5d:f9:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:53:44.781384 env[1235]: 2024-02-09 19:53:44.766 [INFO][4669] k8s.go 491: Wrote updated endpoint to datastore ContainerID="7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c" Namespace="calico-apiserver" Pod="calico-apiserver-b69977687-gxsb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--b69977687--gxsb8-eth0" Feb 9 19:53:44.856280 env[1235]: time="2024-02-09T19:53:44.856239228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:53:44.856384 env[1235]: time="2024-02-09T19:53:44.856283894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:53:44.856384 env[1235]: time="2024-02-09T19:53:44.856300383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:53:44.856459 env[1235]: time="2024-02-09T19:53:44.856385392Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c pid=4753 runtime=io.containerd.runc.v2 Feb 9 19:53:44.856000 audit[4741]: NETFILTER_CFG table=filter:129 family=2 entries=85 op=nft_register_chain pid=4741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:53:44.856000 audit[4741]: SYSCALL arch=c000003e syscall=46 success=yes exit=46104 a0=3 a1=7ffd3e46e8c0 a2=0 a3=7ffd3e46e8ac items=0 ppid=3461 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:44.856000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:53:44.864389 env[1235]: time="2024-02-09T19:53:44.859756687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:53:44.864389 env[1235]: time="2024-02-09T19:53:44.859780605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:53:44.864389 env[1235]: time="2024-02-09T19:53:44.859787611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:53:44.864389 env[1235]: time="2024-02-09T19:53:44.860031175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5 pid=4750 runtime=io.containerd.runc.v2 Feb 9 19:53:44.884537 systemd-resolved[1169]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:53:44.884798 systemd-resolved[1169]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:53:44.905327 env[1235]: time="2024-02-09T19:53:44.904846569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b69977687-pbx7x,Uid:00540ab5-fdde-4403-bb4c-ca714f7bff33,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5\"" Feb 9 19:53:44.910761 env[1235]: time="2024-02-09T19:53:44.910740439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:53:44.918660 env[1235]: time="2024-02-09T19:53:44.916049730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b69977687-gxsb8,Uid:f2ba577c-9618-4373-b6ae-f71e3310c3b1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c\"" Feb 9 19:53:45.965678 systemd-networkd[1111]: cali49b757be0c4: Gained IPv6LL Feb 9 19:53:46.285737 systemd-networkd[1111]: califc429fa19e0: Gained IPv6LL Feb 9 19:53:46.381036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170941546.mount: Deactivated successfully. 
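The audit PROCTITLE field in the NETFILTER_CFG record at 19:53:44.856 above is the process command line, hex-encoded with NUL bytes separating the arguments. A minimal Go sketch (standard library only, written for illustration) that decodes the exact value logged above:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value from the audit record at 19:53:44.856 above.
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel records argv with NUL separators between the arguments.
	args := bytes.Split(raw, []byte{0})
	parts := make([]string, len(args))
	for i, a := range args {
		parts[i] = string(a)
	}
	fmt.Println(strings.Join(parts, " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```

The shorter PROCTITLE value 737368643A20636F7265205B707269765D that appears in the later SSH records decodes the same way to "sshd: core [priv]" (a single string, so no NUL separators).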
Feb 9 19:53:47.985805 env[1235]: time="2024-02-09T19:53:47.985763025Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:47.987214 env[1235]: time="2024-02-09T19:53:47.987194468Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:47.988815 env[1235]: time="2024-02-09T19:53:47.988800717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:47.990562 env[1235]: time="2024-02-09T19:53:47.990531387Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:47.992302 env[1235]: time="2024-02-09T19:53:47.992282866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:53:47.994611 env[1235]: time="2024-02-09T19:53:47.994592601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:53:47.996045 env[1235]: time="2024-02-09T19:53:47.995563085Z" level=info msg="CreateContainer within sandbox \"79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:53:48.006937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292881832.mount: Deactivated successfully. 
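The Calico IPAM entries at 19:53:44.724-725 above report the assignment against the node's block (IPv4=[192.168.88.134/26]) while the WorkloadEndpoint stores the per-pod address as 192.168.88.134/32. A small standard-library sketch illustrating that relationship with net/netip arithmetic; this is not Calico code, just a check that the /32 sits inside the /26 allocation block:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values taken from the Calico IPAM log entries at 19:53:44.724 above.
	block := netip.MustParsePrefix("192.168.88.134/26")
	podNet := netip.MustParsePrefix("192.168.88.134/32")

	// Masked() normalizes the block to its network address (192.168.88.128/26).
	block = block.Masked()

	fmt.Println("allocation block:", block)
	fmt.Println("pod /32 inside block:", block.Contains(podNet.Addr())) // true
}
```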
Feb 9 19:53:48.011374 env[1235]: time="2024-02-09T19:53:48.011354322Z" level=info msg="CreateContainer within sandbox \"79c2ef7ecef19954751cb5cc635e2d7202e854353d428d2f14c51eab376eacc5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cb3f425caeb0b56f973d3613e3bd59d48c33b66eb691ba8297772a28ee1275c5\"" Feb 9 19:53:48.013133 env[1235]: time="2024-02-09T19:53:48.012660350Z" level=info msg="StartContainer for \"cb3f425caeb0b56f973d3613e3bd59d48c33b66eb691ba8297772a28ee1275c5\"" Feb 9 19:53:48.065529 env[1235]: time="2024-02-09T19:53:48.065496182Z" level=info msg="StartContainer for \"cb3f425caeb0b56f973d3613e3bd59d48c33b66eb691ba8297772a28ee1275c5\" returns successfully" Feb 9 19:53:48.576155 env[1235]: time="2024-02-09T19:53:48.576126848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:48.576957 env[1235]: time="2024-02-09T19:53:48.576940878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:48.577884 env[1235]: time="2024-02-09T19:53:48.577869124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:48.578837 env[1235]: time="2024-02-09T19:53:48.578823258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:53:48.579294 env[1235]: time="2024-02-09T19:53:48.579278266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:53:48.581807 env[1235]: time="2024-02-09T19:53:48.581790021Z" level=info msg="CreateContainer within sandbox \"7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:53:48.594912 env[1235]: time="2024-02-09T19:53:48.594890254Z" level=info msg="CreateContainer within sandbox \"7e7e3b719f54677af391c7e05cf4276f9bf089a6f621253be55d10e444f60c7c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"300563850b495d97baef9859c402c8b1bfce2f1143dd71a35b8514339736245d\"" Feb 9 19:53:48.595678 env[1235]: time="2024-02-09T19:53:48.595665318Z" level=info msg="StartContainer for \"300563850b495d97baef9859c402c8b1bfce2f1143dd71a35b8514339736245d\"" Feb 9 19:53:48.649149 env[1235]: time="2024-02-09T19:53:48.645288447Z" level=info msg="StartContainer for \"300563850b495d97baef9859c402c8b1bfce2f1143dd71a35b8514339736245d\" returns successfully" Feb 9 19:53:48.988269 kubelet[2290]: I0209 19:53:48.988200 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b69977687-pbx7x" podStartSLOduration=-9.223372030868843e+09 pod.CreationTimestamp="2024-02-09 19:53:43 +0000 UTC" firstStartedPulling="2024-02-09 19:53:44.910565639 +0000 UTC m=+68.315931428" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:53:48.984649837 +0000 UTC m=+72.390015633" watchObservedRunningTime="2024-02-09 19:53:48.985932479 +0000 
UTC m=+72.391298266" Feb 9 19:53:48.989346 kubelet[2290]: I0209 19:53:48.988292 2290 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b69977687-gxsb8" podStartSLOduration=-9.223372030866508e+09 pod.CreationTimestamp="2024-02-09 19:53:43 +0000 UTC" firstStartedPulling="2024-02-09 19:53:44.924976122 +0000 UTC m=+68.330341910" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:53:48.978415294 +0000 UTC m=+72.383781088" watchObservedRunningTime="2024-02-09 19:53:48.988267995 +0000 UTC m=+72.393633791" Feb 9 19:53:49.005231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909453733.mount: Deactivated successfully. Feb 9 19:53:49.138944 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 9 19:53:49.139056 kernel: audit: type=1325 audit(1707508429.135:322): table=filter:130 family=2 entries=8 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:49.144046 kernel: audit: type=1300 audit(1707508429.135:322): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc123e47f0 a2=0 a3=7ffc123e47dc items=0 ppid=2445 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:49.135000 audit[4921]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:49.135000 audit[4921]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc123e47f0 a2=0 a3=7ffc123e47dc items=0 ppid=2445 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:49.149856 kernel: audit: type=1327 audit(1707508429.135:322): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:49.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:49.149000 audit[4921]: NETFILTER_CFG table=nat:131 family=2 entries=78 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:49.159644 kernel: audit: type=1325 audit(1707508429.149:323): table=nat:131 family=2 entries=78 op=nft_register_rule pid=4921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:49.159691 kernel: audit: type=1300 audit(1707508429.149:323): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc123e47f0 a2=0 a3=7ffc123e47dc items=0 ppid=2445 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:49.149000 audit[4921]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc123e47f0 a2=0 a3=7ffc123e47dc items=0 ppid=2445 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:49.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 
19:53:49.164588 kernel: audit: type=1327 audit(1707508429.149:323): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:49.178000 audit[4947]: NETFILTER_CFG table=filter:132 family=2 entries=8 op=nft_register_rule pid=4947 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:49.178000 audit[4947]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdb113aee0 a2=0 a3=7ffdb113aecc items=0 ppid=2445 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:49.186144 kernel: audit: type=1325 audit(1707508429.178:324): table=filter:132 family=2 entries=8 op=nft_register_rule pid=4947 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:49.186176 kernel: audit: type=1300 audit(1707508429.178:324): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdb113aee0 a2=0 a3=7ffdb113aecc items=0 ppid=2445 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:49.186193 kernel: audit: type=1327 audit(1707508429.178:324): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:49.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:49.188000 audit[4947]: NETFILTER_CFG table=nat:133 family=2 entries=78 op=nft_register_rule pid=4947 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:53:49.188000 audit[4947]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffdb113aee0 a2=0 a3=7ffdb113aecc items=0 ppid=2445 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:53:49.188000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:53:49.191593 kernel: audit: type=1325 audit(1707508429.188:325): table=nat:133 family=2 entries=78 op=nft_register_rule pid=4947 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:11.318193 systemd[1]: run-containerd-runc-k8s.io-bb1ad0e714122f3de805f3d0eda0ed20c91dda15c859bb2c4c9eccfea3788e94-runc.CwPPSr.mount: Deactivated successfully. Feb 9 19:54:12.314920 systemd[1]: run-containerd-runc-k8s.io-874e6eab0a4f4e8cc94cf2a43b8913d2a2d4935b47737d92040b6726e17d2296-runc.O047Me.mount: Deactivated successfully. Feb 9 19:54:13.600966 systemd[1]: Started sshd@7-139.178.70.110:22-139.178.89.65:35918.service. Feb 9 19:54:13.606873 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 19:54:13.606910 kernel: audit: type=1130 audit(1707508453.601:326): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.110:22-139.178.89.65:35918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:54:13.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.110:22-139.178.89.65:35918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:13.753000 audit[5013]: USER_ACCT pid=5013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:13.761359 kernel: audit: type=1101 audit(1707508453.753:327): pid=5013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:13.763614 kernel: audit: type=1103 audit(1707508453.757:328): pid=5013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:13.763641 kernel: audit: type=1006 audit(1707508453.761:329): pid=5013 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 19:54:13.767425 kernel: audit: type=1300 audit(1707508453.761:329): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff643cf2d0 a2=3 a3=0 items=0 ppid=1 pid=5013 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:13.757000 audit[5013]: CRED_ACQ pid=5013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:13.769616 kernel: audit: type=1327 audit(1707508453.761:329): proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:13.761000 audit[5013]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff643cf2d0 a2=3 a3=0 items=0 ppid=1 pid=5013 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:13.761000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:13.771242 sshd[5013]: Accepted publickey for core from 139.178.89.65 port 35918 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:13.770777 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:13.792641 systemd[1]: Started session-10.scope. Feb 9 19:54:13.793267 systemd-logind[1219]: New session 10 of user core. 
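The kubelet pod_startup_latency_tracker entries logged earlier at 19:53:48.988 report podStartSLOduration values around -9.223372e+09 together with lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC". That magnitude is within a few seconds of Go's minimum time.Duration (math.MinInt64 nanoseconds), which is consistent with a subtraction involving the zero lastFinishedPulling time saturating the duration. A sketch reproducing the saturation; this is illustrative only and not kubelet's actual computation:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC
	// Approximate firstStartedPulling from the log above (19:53:44 on 2024-02-09).
	firstStartedPulling := time.Date(2024, time.February, 9, 19, 53, 44, 0, time.UTC)

	// The true difference (~2023 years) overflows int64 nanoseconds, so
	// time.Time.Sub clamps to the minimum representable time.Duration.
	d := lastFinishedPulling.Sub(firstStartedPulling)

	fmt.Println(d == time.Duration(math.MinInt64)) // true
	fmt.Printf("%.6e seconds\n", d.Seconds())      // -9.223372e+09, the magnitude seen in the log
}
```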
Feb 9 19:54:13.800801 kernel: audit: type=1105 audit(1707508453.796:330): pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:13.796000 audit[5013]: USER_START pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:13.796000 audit[5016]: CRED_ACQ pid=5016 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:13.804575 kernel: audit: type=1103 audit(1707508453.796:331): pid=5016 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:14.177935 sshd[5013]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:14.178000 audit[5013]: USER_END pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:14.178000 audit[5013]: CRED_DISP pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:14.183041 systemd-logind[1219]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:54:14.184068 systemd[1]: sshd@7-139.178.70.110:22-139.178.89.65:35918.service: Deactivated successfully. Feb 9 19:54:14.185521 kernel: audit: type=1106 audit(1707508454.178:332): pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:14.185976 kernel: audit: type=1104 audit(1707508454.178:333): pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:14.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.110:22-139.178.89.65:35918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:14.184743 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:54:14.186927 systemd-logind[1219]: Removed session 10. Feb 9 19:54:14.527297 systemd[1]: run-containerd-runc-k8s.io-300563850b495d97baef9859c402c8b1bfce2f1143dd71a35b8514339736245d-runc.xL75Cq.mount: Deactivated successfully. 
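The kernel duplicates each audit record as a numeric "type=<n>" line next to journald's named rendering. Collecting the pairs visible in this log (plus the standard audit constants for the few that appear in only one form here) gives a small lookup, sketched below purely for reference:

```go
package main

import "fmt"

// Audit record types, paired from the kernel "type=<n>" lines and the named
// records next to them above. 1006 (LOGIN) and 1131 (SERVICE_STOP) are the
// standard Linux audit constants; their names are not spelled out in this log.
var auditType = map[int]string{
	1006: "LOGIN",
	1101: "USER_ACCT",
	1103: "CRED_ACQ",
	1104: "CRED_DISP",
	1105: "USER_START",
	1106: "USER_END",
	1130: "SERVICE_START",
	1131: "SERVICE_STOP",
	1300: "SYSCALL",
	1325: "NETFILTER_CFG",
	1327: "PROCTITLE",
}

func main() {
	fmt.Println(auditType[1130]) // SERVICE_START
}
```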
Feb 9 19:54:14.680000 audit[5091]: NETFILTER_CFG table=filter:134 family=2 entries=7 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:14.680000 audit[5091]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc7b8f1440 a2=0 a3=7ffc7b8f142c items=0 ppid=2445 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:14.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:54:14.682000 audit[5091]: NETFILTER_CFG table=nat:135 family=2 entries=89 op=nft_register_chain pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:14.682000 audit[5091]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffc7b8f1440 a2=0 a3=7ffc7b8f142c items=0 ppid=2445 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:14.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:54:19.182880 systemd[1]: Started sshd@8-139.178.70.110:22-139.178.89.65:47274.service. Feb 9 19:54:19.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.110:22-139.178.89.65:47274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:19.184357 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 9 19:54:19.184386 kernel: audit: type=1130 audit(1707508459.182:337): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.110:22-139.178.89.65:47274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:54:19.254000 audit[5093]: USER_ACCT pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.255887 sshd[5093]: Accepted publickey for core from 139.178.89.65 port 47274 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:19.258569 kernel: audit: type=1101 audit(1707508459.254:338): pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.258000 audit[5093]: CRED_ACQ pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.264185 kernel: audit: type=1103 audit(1707508459.258:339): pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.264237 kernel: audit: type=1006 audit(1707508459.258:340): pid=5093 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 19:54:19.264480 sshd[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:19.258000 audit[5093]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff08017c00 a2=3 a3=0 items=0 ppid=1 pid=5093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:19.268563 kernel: audit: type=1300 audit(1707508459.258:340): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff08017c00 a2=3 a3=0 items=0 ppid=1 pid=5093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:19.269934 kernel: audit: type=1327 audit(1707508459.258:340): proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:19.258000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:19.277409 systemd-logind[1219]: New session 11 of user core. Feb 9 19:54:19.277948 systemd[1]: Started session-11.scope. 
Feb 9 19:54:19.280000 audit[5093]: USER_START pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.285733 kernel: audit: type=1105 audit(1707508459.280:341): pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.284000 audit[5096]: CRED_ACQ pid=5096 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.290556 kernel: audit: type=1103 audit(1707508459.284:342): pid=5096 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.612513 sshd[5093]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:19.612000 audit[5093]: USER_END pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.612000 audit[5093]: CRED_DISP pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.620364 kernel: audit: type=1106 audit(1707508459.612:343): pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.620645 kernel: audit: type=1104 audit(1707508459.612:344): pid=5093 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:19.619629 systemd[1]: sshd@8-139.178.70.110:22-139.178.89.65:47274.service: Deactivated successfully. Feb 9 19:54:19.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.110:22-139.178.89.65:47274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:19.620110 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:54:19.620643 systemd-logind[1219]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:54:19.621307 systemd-logind[1219]: Removed session 11. Feb 9 19:54:24.616094 systemd[1]: Started sshd@9-139.178.70.110:22-139.178.89.65:47288.service. 
Feb 9 19:54:24.622944 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:54:24.622989 kernel: audit: type=1130 audit(1707508464.616:346): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.110:22-139.178.89.65:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:24.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.110:22-139.178.89.65:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:24.883000 audit[5109]: USER_ACCT pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:24.885103 sshd[5109]: Accepted publickey for core from 139.178.89.65 port 47288 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:24.888000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:24.889750 sshd[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:24.893524 kernel: audit: type=1101 audit(1707508464.883:347): pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:24.893586 kernel: audit: type=1103 audit(1707508464.888:348): pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:24.888000 audit[5109]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9a370c10 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:24.900353 kernel: audit: type=1006 audit(1707508464.888:349): pid=5109 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 9 19:54:24.900444 kernel: audit: type=1300 audit(1707508464.888:349): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9a370c10 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:24.900468 kernel: audit: type=1327 audit(1707508464.888:349): proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:24.888000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:24.904224 systemd-logind[1219]: New session 12 of user core. Feb 9 19:54:24.904622 systemd[1]: Started session-12.scope. 
Feb 9 19:54:24.907000 audit[5109]: USER_START pid=5109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:24.908000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:24.914719 kernel: audit: type=1105 audit(1707508464.907:350): pid=5109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:24.914758 kernel: audit: type=1103 audit(1707508464.908:351): pid=5114 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:25.015518 sshd[5109]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:25.016000 audit[5109]: USER_END pid=5109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:25.017615 systemd[1]: sshd@9-139.178.70.110:22-139.178.89.65:47288.service: Deactivated successfully. Feb 9 19:54:25.018101 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:54:25.020581 kernel: audit: type=1106 audit(1707508465.016:352): pid=5109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:25.020630 kernel: audit: type=1104 audit(1707508465.016:353): pid=5109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:25.016000 audit[5109]: CRED_DISP pid=5109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:25.023641 systemd-logind[1219]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:54:25.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.110:22-139.178.89.65:47288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:25.024155 systemd-logind[1219]: Removed session 12. Feb 9 19:54:30.018129 systemd[1]: Started sshd@10-139.178.70.110:22-139.178.89.65:33088.service. 
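Each SSH login above produces a USER_START / USER_END audit pair keyed by the session id (ses=10, 11, 12 so far). A small standard-library sketch that pairs such records and reports how long each session lasted; the sample lines are abridged copies of the records above, kept only to the fields the sketch needs:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// Abridged from the audit records for sessions 10 and 11 above.
var lines = []string{
	"Feb 9 19:54:13.796000 audit[5013]: USER_START pid=5013 uid=0 auid=500 ses=10",
	"Feb 9 19:54:14.178000 audit[5013]: USER_END pid=5013 uid=0 auid=500 ses=10",
	"Feb 9 19:54:19.280000 audit[5093]: USER_START pid=5093 uid=0 auid=500 ses=11",
	"Feb 9 19:54:19.612000 audit[5093]: USER_END pid=5093 uid=0 auid=500 ses=11",
}

var re = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) audit\[\d+\]: (USER_START|USER_END) .*ses=(\d+)`)

func main() {
	const layout = "Jan 2 15:04:05.000000"
	start := map[string]time.Time{}
	for _, l := range lines {
		m := re.FindStringSubmatch(l)
		if m == nil {
			continue
		}
		t, err := time.Parse(layout, m[1])
		if err != nil {
			continue
		}
		switch m[2] {
		case "USER_START":
			start[m[3]] = t
		case "USER_END":
			if s, ok := start[m[3]]; ok {
				fmt.Printf("ses=%s lasted %s\n", m[3], t.Sub(s))
			}
		}
	}
}
```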
Feb 9 19:54:30.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.110:22-139.178.89.65:33088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:30.025687 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:54:30.025711 kernel: audit: type=1130 audit(1707508470.017:355): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.110:22-139.178.89.65:33088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:30.239000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.240332 sshd[5126]: Accepted publickey for core from 139.178.89.65 port 33088 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:30.243547 kernel: audit: type=1101 audit(1707508470.239:356): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.243000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.244368 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:30.249369 kernel: audit: type=1103 audit(1707508470.243:357): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.249427 kernel: audit: type=1006 audit(1707508470.243:358): pid=5126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Feb 9 19:54:30.256110 kernel: audit: type=1300 audit(1707508470.243:358): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5b32ed50 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:30.256138 kernel: audit: type=1327 audit(1707508470.243:358): proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:30.256164 kernel: audit: type=1105 audit(1707508470.253:359): pid=5126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.243000 audit[5126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5b32ed50 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:30.243000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 
19:54:30.253000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.261356 kernel: audit: type=1103 audit(1707508470.254:360): pid=5129 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.254000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.250345 systemd[1]: Started session-13.scope. Feb 9 19:54:30.251005 systemd-logind[1219]: New session 13 of user core. Feb 9 19:54:30.444326 sshd[5126]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:30.486000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.488094 systemd-logind[1219]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:54:30.499311 kernel: audit: type=1106 audit(1707508470.486:361): pid=5126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.507651 kernel: audit: type=1104 audit(1707508470.486:362): pid=5126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.486000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:30.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.110:22-139.178.89.65:33088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:30.489232 systemd[1]: sshd@10-139.178.70.110:22-139.178.89.65:33088.service: Deactivated successfully. Feb 9 19:54:30.489842 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:54:30.491070 systemd-logind[1219]: Removed session 13. Feb 9 19:54:35.431679 systemd[1]: Started sshd@11-139.178.70.110:22-139.178.89.65:33100.service. Feb 9 19:54:35.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.110:22-139.178.89.65:33100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:54:35.433001 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:54:35.433032 kernel: audit: type=1130 audit(1707508475.431:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.110:22-139.178.89.65:33100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:35.472000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.475000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.476934 sshd[5142]: Accepted publickey for core from 139.178.89.65 port 33100 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:35.476353 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:35.479078 kernel: audit: type=1101 audit(1707508475.472:365): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.479109 kernel: audit: type=1103 audit(1707508475.475:366): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.481009 kernel: audit: type=1006 audit(1707508475.475:367): pid=5142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 9 19:54:35.475000 audit[5142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccb1f6a50 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:35.484432 kernel: audit: type=1300 audit(1707508475.475:367): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccb1f6a50 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:35.475000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:35.486551 kernel: audit: type=1327 audit(1707508475.475:367): proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:35.487183 systemd-logind[1219]: New session 14 of user core. Feb 9 19:54:35.487527 systemd[1]: Started session-14.scope. 
Feb 9 19:54:35.490000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.491000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.497584 kernel: audit: type=1105 audit(1707508475.490:368): pid=5142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.497881 kernel: audit: type=1103 audit(1707508475.491:369): pid=5145 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.110:22-139.178.89.65:33116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:35.660659 sshd[5142]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:35.660837 systemd[1]: Started sshd@12-139.178.70.110:22-139.178.89.65:33116.service. Feb 9 19:54:35.663000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.667951 kernel: audit: type=1130 audit(1707508475.660:370): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.110:22-139.178.89.65:33116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:35.667988 kernel: audit: type=1106 audit(1707508475.663:371): pid=5142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.668000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.110:22-139.178.89.65:33100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:35.669367 systemd-logind[1219]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:54:35.669483 systemd[1]: sshd@11-139.178.70.110:22-139.178.89.65:33100.service: Deactivated successfully. 
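systemd names each socket-activated SSH connection unit after a connection counter, the local listener, and the peer, as in sshd@11-139.178.70.110:22-139.178.89.65:33100.service above. A small sketch splitting such a unit name into its parts; illustrative only, and the naive split assumes IPv4 addresses (no "-" inside the address text):

```go
package main

import (
	"fmt"
	"net/netip"
	"strings"
)

func main() {
	// Unit name as logged above for the connection that became session 14.
	unit := "sshd@11-139.178.70.110:22-139.178.89.65:33100.service"

	instance := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	// instance is "<counter>-<local>:<port>-<peer>:<port>".
	parts := strings.SplitN(instance, "-", 3)
	if len(parts) != 3 {
		panic("unexpected unit name")
	}
	local, err := netip.ParseAddrPort(parts[1])
	if err != nil {
		panic(err)
	}
	peer, err := netip.ParseAddrPort(parts[2])
	if err != nil {
		panic(err)
	}
	fmt.Printf("connection #%s: %s -> %s\n", parts[0], peer, local)
	// connection #11: 139.178.89.65:33100 -> 139.178.70.110:22
}
```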
Feb 9 19:54:35.669990 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:54:35.670304 systemd-logind[1219]: Removed session 14. Feb 9 19:54:35.693000 audit[5153]: USER_ACCT pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.694000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.694000 audit[5153]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd56d99b20 a2=3 a3=0 items=0 ppid=1 pid=5153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:35.694000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:35.700000 audit[5153]: USER_START pid=5153 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.701000 audit[5158]: CRED_ACQ pid=5158 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:35.694952 sshd[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:35.713220 sshd[5153]: Accepted publickey for core from 139.178.89.65 port 33116 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:35.697107 systemd-logind[1219]: New session 15 of user core. Feb 9 19:54:35.697704 systemd[1]: Started session-15.scope. Feb 9 19:54:37.070438 systemd[1]: Started sshd@13-139.178.70.110:22-139.178.89.65:33122.service. Feb 9 19:54:37.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.110:22-139.178.89.65:33122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:37.089000 audit[5153]: USER_END pid=5153 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:37.090000 audit[5153]: CRED_DISP pid=5153 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:37.078742 sshd[5153]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:37.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.110:22-139.178.89.65:33116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:54:37.101110 systemd[1]: sshd@12-139.178.70.110:22-139.178.89.65:33116.service: Deactivated successfully. Feb 9 19:54:37.101677 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:54:37.102531 systemd-logind[1219]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:54:37.103391 systemd-logind[1219]: Removed session 15. Feb 9 19:54:37.172000 audit[5170]: USER_ACCT pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:37.174137 sshd[5170]: Accepted publickey for core from 139.178.89.65 port 33122 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:37.173000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:37.173000 audit[5170]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc869e3450 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:37.173000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:37.179603 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:37.183183 systemd[1]: Started session-16.scope. Feb 9 19:54:37.183464 systemd-logind[1219]: New session 16 of user core. Feb 9 19:54:37.185000 audit[5170]: USER_START pid=5170 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:37.186000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:37.388040 sshd[5170]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:37.387000 audit[5170]: USER_END pid=5170 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:37.387000 audit[5170]: CRED_DISP pid=5170 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:37.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.110:22-139.178.89.65:33122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:37.390178 systemd[1]: sshd@13-139.178.70.110:22-139.178.89.65:33122.service: Deactivated successfully. Feb 9 19:54:37.390938 systemd[1]: session-16.scope: Deactivated successfully. 
Feb 9 19:54:37.391176 systemd-logind[1219]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:54:37.391673 systemd-logind[1219]: Removed session 16. Feb 9 19:54:41.753796 systemd[1]: run-containerd-runc-k8s.io-874e6eab0a4f4e8cc94cf2a43b8913d2a2d4935b47737d92040b6726e17d2296-runc.SkPLrO.mount: Deactivated successfully. Feb 9 19:54:42.391282 systemd[1]: Started sshd@14-139.178.70.110:22-139.178.89.65:47538.service. Feb 9 19:54:42.404777 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 19:54:42.431010 kernel: audit: type=1130 audit(1707508482.389:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.110:22-139.178.89.65:47538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:42.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.110:22-139.178.89.65:47538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:42.499000 audit[5233]: USER_ACCT pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.501586 sshd[5233]: Accepted publickey for core from 139.178.89.65 port 47538 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:42.503000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.507182 kernel: audit: type=1101 audit(1707508482.499:392): pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.507243 kernel: audit: type=1103 audit(1707508482.503:393): pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.507319 kernel: audit: type=1006 audit(1707508482.503:394): pid=5233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 19:54:42.509341 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:42.503000 audit[5233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc664b080 a2=3 a3=0 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:42.515553 kernel: audit: type=1300 audit(1707508482.503:394): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc664b080 a2=3 a3=0 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:42.515607 kernel: audit: type=1327 audit(1707508482.503:394): proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:42.503000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:42.514480 systemd[1]: Started session-17.scope. Feb 9 19:54:42.515485 systemd-logind[1219]: New session 17 of user core. Feb 9 19:54:42.516000 audit[5233]: USER_START pid=5233 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.525326 kernel: audit: type=1105 audit(1707508482.516:395): pid=5233 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.525358 kernel: audit: type=1103 audit(1707508482.517:396): pid=5236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.517000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.688597 systemd[1]: Started sshd@15-139.178.70.110:22-139.178.89.65:47554.service. Feb 9 19:54:42.692686 kernel: audit: type=1130 audit(1707508482.687:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.110:22-139.178.89.65:47554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:42.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.110:22-139.178.89.65:47554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:42.690086 sshd[5233]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:42.693025 systemd-logind[1219]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:54:42.689000 audit[5233]: USER_END pid=5233 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.693964 systemd[1]: sshd@14-139.178.70.110:22-139.178.89.65:47538.service: Deactivated successfully. Feb 9 19:54:42.694482 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:54:42.695336 systemd-logind[1219]: Removed session 17. Feb 9 19:54:42.689000 audit[5233]: CRED_DISP pid=5233 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.110:22-139.178.89.65:47538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:54:42.697585 kernel: audit: type=1106 audit(1707508482.689:398): pid=5233 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.728000 audit[5244]: USER_ACCT pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.730463 sshd[5244]: Accepted publickey for core from 139.178.89.65 port 47554 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:42.729000 audit[5244]: CRED_ACQ pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.729000 audit[5244]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda5641cf0 a2=3 a3=0 items=0 ppid=1 pid=5244 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:42.729000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:42.731579 sshd[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:42.735462 systemd[1]: Started session-18.scope. Feb 9 19:54:42.735888 systemd-logind[1219]: New session 18 of user core. Feb 9 19:54:42.738000 audit[5244]: USER_START pid=5244 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:42.739000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:43.259151 systemd[1]: Started sshd@16-139.178.70.110:22-139.178.89.65:47562.service. Feb 9 19:54:43.260326 sshd[5244]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:43.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.110:22-139.178.89.65:47562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:43.260000 audit[5244]: USER_END pid=5244 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:43.261000 audit[5244]: CRED_DISP pid=5244 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:43.263599 systemd[1]: sshd@15-139.178.70.110:22-139.178.89.65:47554.service: Deactivated successfully. 
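In the kernel-printed lines, a prefix such as audit(1707508482.389:391) is a Unix timestamp plus a per-event serial number; records that share a serial (for example a SYSCALL and its PROCTITLE) describe the same event. A small sketch (Python, ours) converting the value seen above:

    from datetime import datetime, timezone

    secs, serial = "1707508482.389:391".split(":")
    when = datetime.fromtimestamp(float(secs), tz=timezone.utc)
    print(when.isoformat(timespec="seconds"), "serial", serial)
    # -> 2024-02-09T19:54:42+00:00 serial 391, matching the journal timestamp above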
Feb 9 19:54:43.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.110:22-139.178.89.65:47554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:43.264588 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:54:43.264920 systemd-logind[1219]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:54:43.265496 systemd-logind[1219]: Removed session 18. Feb 9 19:54:43.322000 audit[5275]: USER_ACCT pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:43.324407 sshd[5275]: Accepted publickey for core from 139.178.89.65 port 47562 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:43.323000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:43.323000 audit[5275]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe78d994f0 a2=3 a3=0 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:43.323000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:43.326810 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:43.330363 systemd[1]: Started session-19.scope. Feb 9 19:54:43.330774 systemd-logind[1219]: New session 19 of user core. Feb 9 19:54:43.332000 audit[5275]: USER_START pid=5275 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:43.333000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:44.712103 sshd[5275]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:44.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.110:22-139.178.89.65:47564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:54:44.733000 audit[5275]: USER_END pid=5275 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:44.737000 audit[5275]: CRED_DISP pid=5275 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:44.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.110:22-139.178.89.65:47562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:44.723130 systemd[1]: Started sshd@17-139.178.70.110:22-139.178.89.65:47564.service. Feb 9 19:54:44.740255 systemd[1]: sshd@16-139.178.70.110:22-139.178.89.65:47562.service: Deactivated successfully. Feb 9 19:54:44.741011 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:54:44.741225 systemd-logind[1219]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:54:44.744099 systemd-logind[1219]: Removed session 19. Feb 9 19:54:44.907501 sshd[5330]: Accepted publickey for core from 139.178.89.65 port 47564 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:44.905000 audit[5330]: USER_ACCT pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:44.906000 audit[5330]: CRED_ACQ pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:44.906000 audit[5330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd149e9b40 a2=3 a3=0 items=0 ppid=1 pid=5330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:44.906000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:44.911924 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:44.918306 systemd[1]: Started session-20.scope. Feb 9 19:54:44.918428 systemd-logind[1219]: New session 20 of user core. 
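The numeric type= codes in the kernel audit lines correspond to the named record types interleaved in this log; the mapping below is taken from linux/audit.h and can be cross-checked against adjacent pairs here (type=1101 alongside USER_ACCT, type=1327 alongside PROCTITLE, and so on). A sketch, not an exhaustive table:

    # Values from linux/audit.h; the pairs can be read off the surrounding entries.
    AUDIT_TYPES = {
        1006: "LOGIN",          # login-uid transition for a new session
        1101: "USER_ACCT",      # PAM accounting
        1103: "CRED_ACQ",       # PAM setcred
        1104: "CRED_DISP",
        1105: "USER_START",     # PAM session_open
        1106: "USER_END",       # PAM session_close
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
    }
    print(AUDIT_TYPES[1130], AUDIT_TYPES[1327])  # -> SERVICE_START PROCTITLE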
Feb 9 19:54:44.920000 audit[5330]: USER_START pid=5330 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:44.921000 audit[5360]: CRED_ACQ pid=5360 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:44.970000 audit[5362]: NETFILTER_CFG table=filter:136 family=2 entries=18 op=nft_register_rule pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:44.970000 audit[5362]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fff062ba020 a2=0 a3=7fff062ba00c items=0 ppid=2445 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:44.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:54:44.972000 audit[5362]: NETFILTER_CFG table=nat:137 family=2 entries=94 op=nft_register_rule pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:44.972000 audit[5362]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7fff062ba020 a2=0 a3=7fff062ba00c items=0 ppid=2445 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:44.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:54:45.022000 audit[5388]: NETFILTER_CFG table=filter:138 family=2 entries=30 op=nft_register_rule pid=5388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:45.022000 audit[5388]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fff2c3024e0 a2=0 a3=7fff2c3024cc items=0 ppid=2445 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:45.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:54:45.024000 audit[5388]: NETFILTER_CFG table=nat:139 family=2 entries=94 op=nft_register_rule pid=5388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:45.024000 audit[5388]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7fff2c3024e0 a2=0 a3=7fff2c3024cc items=0 ppid=2445 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:45.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:54:45.605016 sshd[5330]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:45.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.110:22-139.178.89.65:47568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:45.633000 audit[5330]: USER_END pid=5330 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:45.634000 audit[5330]: CRED_DISP pid=5330 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:45.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.110:22-139.178.89.65:47564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:45.620693 systemd[1]: Started sshd@18-139.178.70.110:22-139.178.89.65:47568.service. Feb 9 19:54:45.641851 systemd[1]: sshd@17-139.178.70.110:22-139.178.89.65:47564.service: Deactivated successfully. Feb 9 19:54:45.642612 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:54:45.642855 systemd-logind[1219]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:54:45.643386 systemd-logind[1219]: Removed session 20. Feb 9 19:54:45.705000 audit[5393]: USER_ACCT pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:45.707442 sshd[5393]: Accepted publickey for core from 139.178.89.65 port 47568 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:45.706000 audit[5393]: CRED_ACQ pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:45.706000 audit[5393]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe231f02e0 a2=3 a3=0 items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:45.706000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:45.709205 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:45.714466 systemd[1]: Started session-21.scope. Feb 9 19:54:45.714674 systemd-logind[1219]: New session 21 of user core. 
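Each SYSCALL record above is a flat run of key=value fields. A minimal sketch (Python) splitting an abridged copy of the pid=5393 record; on arch=c000003e (x86-64) syscall=1 is write(2) and exit=3 is its return value, so this is sshd writing three bytes during session setup, most likely the login-uid write that produces the adjacent type=1006 LOGIN record (that last point is our inference, not logged text):

    # Abridged copy of the pid=5393 SYSCALL record above; quoted values keep their quotes.
    record = ('arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe231f02e0 a2=3 a3=0 '
              'items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 tty=(none) ses=21 comm="sshd"')
    fields = dict(item.split("=", 1) for item in record.split())
    print(fields["syscall"], fields["exit"], fields["comm"])  # -> 1 3 "sshd"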
Feb 9 19:54:45.715000 audit[5393]: USER_START pid=5393 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:45.716000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:45.869585 sshd[5393]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:45.869000 audit[5393]: USER_END pid=5393 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:45.869000 audit[5393]: CRED_DISP pid=5393 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:45.871663 systemd[1]: sshd@18-139.178.70.110:22-139.178.89.65:47568.service: Deactivated successfully. Feb 9 19:54:45.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.110:22-139.178.89.65:47568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:45.872135 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:54:45.872368 systemd-logind[1219]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:54:45.872834 systemd-logind[1219]: Removed session 21. Feb 9 19:54:50.880606 systemd[1]: Started sshd@19-139.178.70.110:22-139.178.89.65:51540.service. Feb 9 19:54:50.891875 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 19:54:50.917402 kernel: audit: type=1130 audit(1707508490.879:440): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.110:22-139.178.89.65:51540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:50.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.110:22-139.178.89.65:51540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:54:51.211374 sshd[5408]: Accepted publickey for core from 139.178.89.65 port 51540 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:51.209000 audit[5408]: USER_ACCT pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.219676 kernel: audit: type=1101 audit(1707508491.209:441): pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.225007 kernel: audit: type=1103 audit(1707508491.210:442): pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.225035 kernel: audit: type=1006 audit(1707508491.210:443): pid=5408 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Feb 9 19:54:51.228893 kernel: audit: type=1300 audit(1707508491.210:443): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9a99d140 a2=3 a3=0 items=0 ppid=1 pid=5408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:51.228919 kernel: audit: type=1327 audit(1707508491.210:443): proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:51.210000 audit[5408]: CRED_ACQ pid=5408 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.210000 audit[5408]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9a99d140 a2=3 a3=0 items=0 ppid=1 pid=5408 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:51.210000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:51.228960 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:51.241906 systemd[1]: Started session-22.scope. Feb 9 19:54:51.242396 systemd-logind[1219]: New session 22 of user core. 
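The auid=4294967295 and ses=4294967295 values in the pre-authentication records above are simply (uint32)-1, the audit subsystem's "unset" marker; the type=1006 LOGIN record is where sshd replaces them with the real values (old-auid=4294967295 auid=500, old-ses=4294967295 ses=22). A one-liner sketch (Python, ours):

    AUDIT_UNSET = 0xFFFFFFFF                     # (uint32)-1, printed as 4294967295
    def show(v): return "unset" if v == AUDIT_UNSET else str(v)
    print(show(4294967295), "->", show(500))     # -> unset -> 500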
Feb 9 19:54:51.243000 audit[5408]: USER_START pid=5408 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.248000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.252721 kernel: audit: type=1105 audit(1707508491.243:444): pid=5408 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.252752 kernel: audit: type=1103 audit(1707508491.248:445): pid=5436 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.266000 audit[5437]: NETFILTER_CFG table=filter:140 family=2 entries=18 op=nft_register_rule pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:51.275259 kernel: audit: type=1325 audit(1707508491.266:446): table=filter:140 family=2 entries=18 op=nft_register_rule pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:51.275306 kernel: audit: type=1300 audit(1707508491.266:446): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff17e03630 a2=0 a3=7fff17e0361c items=0 ppid=2445 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:51.266000 audit[5437]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff17e03630 a2=0 a3=7fff17e0361c items=0 ppid=2445 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:51.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:54:51.276000 audit[5437]: NETFILTER_CFG table=nat:141 family=2 entries=178 op=nft_register_chain pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:54:51.276000 audit[5437]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7fff17e03630 a2=0 a3=7fff17e0361c items=0 ppid=2445 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:51.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:54:51.602215 sshd[5408]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:51.601000 audit[5408]: USER_END pid=5408 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.601000 audit[5408]: CRED_DISP pid=5408 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:51.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.110:22-139.178.89.65:51540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:51.604644 systemd[1]: sshd@19-139.178.70.110:22-139.178.89.65:51540.service: Deactivated successfully. Feb 9 19:54:51.605687 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:54:51.605972 systemd-logind[1219]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:54:51.607005 systemd-logind[1219]: Removed session 22. Feb 9 19:54:56.604214 systemd[1]: Started sshd@20-139.178.70.110:22-139.178.89.65:51548.service. Feb 9 19:54:56.622361 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 9 19:54:56.622401 kernel: audit: type=1130 audit(1707508496.602:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.110:22-139.178.89.65:51548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:56.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.110:22-139.178.89.65:51548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:54:56.746000 audit[5451]: USER_ACCT pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:56.749390 sshd[5451]: Accepted publickey for core from 139.178.89.65 port 51548 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:54:56.752725 kernel: audit: type=1101 audit(1707508496.746:452): pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:56.752765 kernel: audit: type=1103 audit(1707508496.751:453): pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:56.751000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:56.755908 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:54:56.757811 kernel: audit: type=1006 audit(1707508496.751:454): pid=5451 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 9 19:54:56.761503 kernel: audit: type=1300 audit(1707508496.751:454): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc893101e0 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:56.761547 kernel: audit: type=1327 audit(1707508496.751:454): proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:56.751000 audit[5451]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc893101e0 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:54:56.751000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:54:56.765175 systemd-logind[1219]: New session 23 of user core. Feb 9 19:54:56.765497 systemd[1]: Started session-23.scope. 
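The PROCTITLE attached to the NETFILTER_CFG events above (the filter and nat rule reloads) decodes, with the same NUL-split rule as before, to the iptables-restore invocation that registered those rules. A short sketch (Python, ours):

    raw = bytes.fromhex(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273")
    print([p.decode() for p in raw.split(b"\x00")])
    # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']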
Feb 9 19:54:56.767000 audit[5451]: USER_START pid=5451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:56.767000 audit[5454]: CRED_ACQ pid=5454 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:56.776124 kernel: audit: type=1105 audit(1707508496.767:455): pid=5451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:56.776674 kernel: audit: type=1103 audit(1707508496.767:456): pid=5454 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:57.061767 sshd[5451]: pam_unix(sshd:session): session closed for user core Feb 9 19:54:57.073000 audit[5451]: USER_END pid=5451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:57.084693 kernel: audit: type=1106 audit(1707508497.073:457): pid=5451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:57.084743 kernel: audit: type=1104 audit(1707508497.074:458): pid=5451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:57.074000 audit[5451]: CRED_DISP pid=5451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:54:57.076721 systemd[1]: sshd@20-139.178.70.110:22-139.178.89.65:51548.service: Deactivated successfully. Feb 9 19:54:57.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.110:22-139.178.89.65:51548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:54:57.077353 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:54:57.081045 systemd-logind[1219]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:54:57.081691 systemd-logind[1219]: Removed session 23. Feb 9 19:55:02.062495 systemd[1]: Started sshd@21-139.178.70.110:22-139.178.89.65:54914.service. 
Feb 9 19:55:02.064994 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:55:02.068557 kernel: audit: type=1130 audit(1707508502.061:460): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.110:22-139.178.89.65:54914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:02.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.110:22-139.178.89.65:54914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:02.179000 audit[5474]: USER_ACCT pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.181749 sshd[5474]: Accepted publickey for core from 139.178.89.65 port 54914 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:55:02.183000 audit[5474]: CRED_ACQ pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.187639 kernel: audit: type=1101 audit(1707508502.179:461): pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.187677 kernel: audit: type=1103 audit(1707508502.183:462): pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.199056 kernel: audit: type=1006 audit(1707508502.183:463): pid=5474 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Feb 9 19:55:02.199083 kernel: audit: type=1300 audit(1707508502.183:463): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff27afb7b0 a2=3 a3=0 items=0 ppid=1 pid=5474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:02.199115 kernel: audit: type=1327 audit(1707508502.183:463): proctitle=737368643A20636F7265205B707269765D Feb 9 19:55:02.183000 audit[5474]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff27afb7b0 a2=3 a3=0 items=0 ppid=1 pid=5474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:02.183000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:55:02.193080 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:55:02.208181 systemd-logind[1219]: New session 24 of user core. Feb 9 19:55:02.208425 systemd[1]: Started session-24.scope. 
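The per-connection unit names in these entries (sshd@21-139.178.70.110:22-139.178.89.65:54914.service and the earlier instances) encode a connection counter plus the local and remote socket endpoints, which is what a socket-activated sshd with per-connection instances produces. A small parsing sketch (Python; the regex is ours, not systemd's):

    import re

    unit = "sshd@21-139.178.70.110:22-139.178.89.65:54914.service"
    m = re.fullmatch(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service", unit)
    print(m.groups())  # -> ('21', '139.178.70.110', '22', '139.178.89.65', '54914')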
Feb 9 19:55:02.210000 audit[5474]: USER_START pid=5474 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.214000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.226451 kernel: audit: type=1105 audit(1707508502.210:464): pid=5474 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.226502 kernel: audit: type=1103 audit(1707508502.214:465): pid=5477 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.388480 sshd[5474]: pam_unix(sshd:session): session closed for user core Feb 9 19:55:02.387000 audit[5474]: USER_END pid=5474 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.389000 audit[5474]: CRED_DISP pid=5474 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.396829 kernel: audit: type=1106 audit(1707508502.387:466): pid=5474 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.403352 kernel: audit: type=1104 audit(1707508502.389:467): pid=5474 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Feb 9 19:55:02.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.110:22-139.178.89.65:54914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:02.394070 systemd-logind[1219]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:55:02.394933 systemd[1]: sshd@21-139.178.70.110:22-139.178.89.65:54914.service: Deactivated successfully. Feb 9 19:55:02.395459 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:55:02.396343 systemd-logind[1219]: Removed session 24.