Feb 9 02:51:18.654155 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 02:51:18.654170 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 02:51:18.654176 kernel: Disabled fast string operations
Feb 9 02:51:18.654180 kernel: BIOS-provided physical RAM map:
Feb 9 02:51:18.654183 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Feb 9 02:51:18.654187 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Feb 9 02:51:18.654193 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Feb 9 02:51:18.654197 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Feb 9 02:51:18.654201 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Feb 9 02:51:18.654205 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Feb 9 02:51:18.654209 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Feb 9 02:51:18.654212 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Feb 9 02:51:18.654216 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Feb 9 02:51:18.654220 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 02:51:18.654226 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Feb 9 02:51:18.654230 kernel: NX (Execute Disable) protection: active
Feb 9 02:51:18.654235 kernel: SMBIOS 2.7 present.
Feb 9 02:51:18.654239 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Feb 9 02:51:18.654244 kernel: vmware: hypercall mode: 0x00
Feb 9 02:51:18.654248 kernel: Hypervisor detected: VMware
Feb 9 02:51:18.654253 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Feb 9 02:51:18.654257 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Feb 9 02:51:18.654261 kernel: vmware: using clock offset of 5245541301 ns
Feb 9 02:51:18.654266 kernel: tsc: Detected 3408.000 MHz processor
Feb 9 02:51:18.654271 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 02:51:18.654275 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 02:51:18.654280 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Feb 9 02:51:18.654284 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 02:51:18.654289 kernel: total RAM covered: 3072M
Feb 9 02:51:18.654294 kernel: Found optimal setting for mtrr clean up
Feb 9 02:51:18.654299 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Feb 9 02:51:18.654303 kernel: Using GB pages for direct mapping
Feb 9 02:51:18.654308 kernel: ACPI: Early table checksum verification disabled
Feb 9 02:51:18.654312 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Feb 9 02:51:18.654316 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Feb 9 02:51:18.654321 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Feb 9 02:51:18.654325 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Feb 9 02:51:18.654329 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 9 02:51:18.654334 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 9 02:51:18.654339 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Feb 9 02:51:18.654345 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Feb 9 02:51:18.654350 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Feb 9 02:51:18.654355 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Feb 9 02:51:18.654360 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Feb 9 02:51:18.654365 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Feb 9 02:51:18.654370 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Feb 9 02:51:18.654375 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Feb 9 02:51:18.654379 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 9 02:51:18.654384 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 9 02:51:18.654389 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Feb 9 02:51:18.654394 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Feb 9 02:51:18.654398 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Feb 9 02:51:18.654403 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Feb 9 02:51:18.654408 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Feb 9 02:51:18.654413 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Feb 9 02:51:18.654418 kernel: system APIC only can use physical flat
Feb 9 02:51:18.654422 kernel: Setting APIC routing to physical flat.
Feb 9 02:51:18.654427 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 02:51:18.654432 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 9 02:51:18.654437 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 9 02:51:18.654441 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 9 02:51:18.654446 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 9 02:51:18.654452 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 9 02:51:18.654456 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 9 02:51:18.654461 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 9 02:51:18.654465 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Feb 9 02:51:18.654470 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Feb 9 02:51:18.654475 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Feb 9 02:51:18.654479 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Feb 9 02:51:18.654484 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Feb 9 02:51:18.654489 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Feb 9 02:51:18.654493 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Feb 9 02:51:18.654499 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Feb 9 02:51:18.654503 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Feb 9 02:51:18.654508 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Feb 9 02:51:18.654513 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Feb 9 02:51:18.654518 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Feb 9 02:51:18.654522 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Feb 9 02:51:18.654573 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Feb 9 02:51:18.654580 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Feb 9 02:51:18.654585 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Feb 9 02:51:18.654590 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Feb 9 02:51:18.654596 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Feb 9 02:51:18.654601 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Feb 9 02:51:18.654606 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Feb 9 02:51:18.654610 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Feb 9 02:51:18.654615 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Feb 9 02:51:18.654620 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Feb 9 02:51:18.654624 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Feb 9 02:51:18.654629 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Feb 9 02:51:18.654634 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Feb 9 02:51:18.654638 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Feb 9 02:51:18.654644 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Feb 9 02:51:18.654649 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Feb 9 02:51:18.654653 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Feb 9 02:51:18.654658 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Feb 9 02:51:18.654663 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Feb 9 02:51:18.654667 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Feb 9 02:51:18.654672 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Feb 9 02:51:18.654677 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Feb 9 02:51:18.654681 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Feb 9 02:51:18.654686 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Feb 9 02:51:18.654692 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Feb 9 02:51:18.654696 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Feb 9 02:51:18.654701 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Feb 9 02:51:18.654706 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Feb 9 02:51:18.654710 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Feb 9 02:51:18.654715 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Feb 9 02:51:18.654720 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Feb 9 02:51:18.654724 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Feb 9 02:51:18.654729 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Feb 9 02:51:18.654734 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Feb 9 02:51:18.654739 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Feb 9 02:51:18.654744 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Feb 9 02:51:18.654749 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Feb 9 02:51:18.654753 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Feb 9 02:51:18.654758 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Feb 9 02:51:18.654763 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Feb 9 02:51:18.654772 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Feb 9 02:51:18.654777 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Feb 9 02:51:18.654782 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Feb 9 02:51:18.654787 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Feb 9 02:51:18.654792 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Feb 9 02:51:18.654798 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Feb 9 02:51:18.654803 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Feb 9 02:51:18.654808 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Feb 9 02:51:18.654813 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Feb 9 02:51:18.654818 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Feb 9 02:51:18.654823 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Feb 9 02:51:18.654827 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Feb 9 02:51:18.654834 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Feb 9 02:51:18.654839 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Feb 9 02:51:18.654844 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Feb 9 02:51:18.654849 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Feb 9 02:51:18.654854 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Feb 9 02:51:18.654858 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Feb 9 02:51:18.654863 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Feb 9 02:51:18.654868 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Feb 9 02:51:18.654873 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Feb 9 02:51:18.654879 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Feb 9 02:51:18.654884 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Feb 9 02:51:18.654889 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Feb 9 02:51:18.654894 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Feb 9 02:51:18.654899 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Feb 9 02:51:18.654904 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Feb 9 02:51:18.654909 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Feb 9 02:51:18.654914 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Feb 9 02:51:18.654919 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Feb 9 02:51:18.654924 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Feb 9 02:51:18.654930 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Feb 9 02:51:18.654935 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Feb 9 02:51:18.654940 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Feb 9 02:51:18.654945 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Feb 9 02:51:18.654950 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Feb 9 02:51:18.654955 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Feb 9 02:51:18.654960 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Feb 9 02:51:18.654965 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Feb 9 02:51:18.654970 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Feb 9 02:51:18.654975 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Feb 9 02:51:18.654981 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Feb 9 02:51:18.654986 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Feb 9 02:51:18.654995 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Feb 9 02:51:18.655000 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Feb 9 02:51:18.655005 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Feb 9 02:51:18.655010 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Feb 9 02:51:18.655015 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Feb 9 02:51:18.655019 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Feb 9 02:51:18.655024 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Feb 9 02:51:18.655029 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Feb 9 02:51:18.655036 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Feb 9 02:51:18.655041 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Feb 9 02:51:18.655046 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Feb 9 02:51:18.655051 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Feb 9 02:51:18.655056 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Feb 9 02:51:18.655061 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Feb 9 02:51:18.655066 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Feb 9 02:51:18.655070 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Feb 9 02:51:18.655075 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Feb 9 02:51:18.655080 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Feb 9 02:51:18.655086 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Feb 9 02:51:18.655091 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Feb 9 02:51:18.655096 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Feb 9 02:51:18.655101 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Feb 9 02:51:18.655106 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Feb 9 02:51:18.655111 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Feb 9 02:51:18.655116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 9 02:51:18.655122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 9 02:51:18.655127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Feb 9 02:51:18.655133 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Feb 9 02:51:18.655139 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Feb 9 02:51:18.655144 kernel: Zone ranges:
Feb 9 02:51:18.655149 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 02:51:18.655154 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Feb 9 02:51:18.655159 kernel: Normal empty
Feb 9 02:51:18.655164 kernel: Movable zone start for each node
Feb 9 02:51:18.655170 kernel: Early memory node ranges
Feb 9 02:51:18.655175 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Feb 9 02:51:18.655180 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Feb 9 02:51:18.655186 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Feb 9 02:51:18.655191 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Feb 9 02:51:18.655196 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 02:51:18.655201 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Feb 9 02:51:18.655206 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Feb 9 02:51:18.655211 kernel: ACPI: PM-Timer IO Port: 0x1008
Feb 9 02:51:18.655216 kernel: system APIC only can use physical flat
Feb 9 02:51:18.655221 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Feb 9 02:51:18.655227 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 02:51:18.655233 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 02:51:18.655238 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 02:51:18.655243 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 02:51:18.655248 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 02:51:18.655253 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 02:51:18.655258 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 02:51:18.655263 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 02:51:18.655268 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 02:51:18.655273 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 02:51:18.655278 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 02:51:18.655284 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 02:51:18.655289 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 02:51:18.655294 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 02:51:18.655299 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 02:51:18.655304 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 02:51:18.655309 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Feb 9 02:51:18.655314 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Feb 9 02:51:18.655319 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Feb 9 02:51:18.655324 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Feb 9 02:51:18.655330 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Feb 9 02:51:18.655335 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Feb 9 02:51:18.655340 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Feb 9 02:51:18.655345 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Feb 9 02:51:18.655350 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Feb 9 02:51:18.655355 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Feb 9 02:51:18.655360 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Feb 9 02:51:18.655365 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Feb 9 02:51:18.655370 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Feb 9 02:51:18.655375 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Feb 9 02:51:18.655381 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Feb 9 02:51:18.655387 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Feb 9 02:51:18.655391 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Feb 9 02:51:18.655396 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Feb 9 02:51:18.655402 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Feb 9 02:51:18.655407 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Feb 9 02:51:18.655412 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Feb 9 02:51:18.655417 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Feb 9 02:51:18.655422 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Feb 9 02:51:18.655428 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Feb 9 02:51:18.655433 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Feb 9 02:51:18.655438 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Feb 9 02:51:18.655443 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Feb 9 02:51:18.655448 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Feb 9 02:51:18.655453 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Feb 9 02:51:18.655458 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Feb 9 02:51:18.655463 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Feb 9 02:51:18.655468 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Feb 9 02:51:18.655473 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Feb 9 02:51:18.655479 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Feb 9 02:51:18.655484 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Feb 9 02:51:18.655489 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Feb 9 02:51:18.655494 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Feb 9 02:51:18.655500 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Feb 9 02:51:18.655505 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Feb 9 02:51:18.655510 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Feb 9 02:51:18.655515 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Feb 9 02:51:18.655520 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Feb 9 02:51:18.655526 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Feb 9 02:51:18.655538 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Feb 9 02:51:18.655543 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Feb 9 02:51:18.655548 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Feb 9 02:51:18.655553 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Feb 9 02:51:18.655559 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Feb 9 02:51:18.655563 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Feb 9 02:51:18.655568 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Feb 9 02:51:18.655573 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Feb 9 02:51:18.655580 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Feb 9 02:51:18.655585 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Feb 9 02:51:18.655590 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Feb 9 02:51:18.655595 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Feb 9 02:51:18.655600 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Feb 9 02:51:18.655605 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Feb 9 02:51:18.655610 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Feb 9 02:51:18.655615 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Feb 9 02:51:18.655620 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Feb 9 02:51:18.655625 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Feb 9 02:51:18.655631 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Feb 9 02:51:18.655636 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Feb 9 02:51:18.655641 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Feb 9 02:51:18.655646 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Feb 9 02:51:18.655651 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Feb 9 02:51:18.655656 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Feb 9 02:51:18.655661 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Feb 9 02:51:18.655666 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Feb 9 02:51:18.655671 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Feb 9 02:51:18.655677 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Feb 9 02:51:18.655682 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Feb 9 02:51:18.655687 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Feb 9 02:51:18.655692 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Feb 9 02:51:18.655697 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Feb 9 02:51:18.655702 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Feb 9 02:51:18.655707 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Feb 9 02:51:18.655712 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Feb 9 02:51:18.655717 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Feb 9 02:51:18.655722 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Feb 9 02:51:18.655728 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Feb 9 02:51:18.655733 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Feb 9 02:51:18.655738 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Feb 9 02:51:18.655743 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Feb 9 02:51:18.655748 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Feb 9 02:51:18.655753 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Feb 9 02:51:18.655758 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Feb 9 02:51:18.655764 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Feb 9 02:51:18.655768 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Feb 9 02:51:18.655774 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Feb 9 02:51:18.655779 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Feb 9 02:51:18.655784 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Feb 9 02:51:18.655789 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Feb 9 02:51:18.655794 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Feb 9 02:51:18.655799 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Feb 9 02:51:18.655804 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Feb 9 02:51:18.655809 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Feb 9 02:51:18.655814 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Feb 9 02:51:18.655819 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Feb 9 02:51:18.655825 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Feb 9 02:51:18.655830 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Feb 9 02:51:18.655835 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Feb 9 02:51:18.655840 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Feb 9 02:51:18.655845 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Feb 9 02:51:18.655850 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Feb 9 02:51:18.655856 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Feb 9 02:51:18.655860 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Feb 9 02:51:18.655866 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Feb 9 02:51:18.655872 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Feb 9 02:51:18.655877 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Feb 9 02:51:18.655882 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Feb 9 02:51:18.655887 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Feb 9 02:51:18.655892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Feb 9 02:51:18.655897 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 02:51:18.655902 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Feb 9 02:51:18.655907 kernel: TSC deadline timer available
Feb 9 02:51:18.655913 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Feb 9 02:51:18.655919 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Feb 9 02:51:18.655924 kernel: Booting paravirtualized kernel on VMware hypervisor
Feb 9 02:51:18.655929 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 02:51:18.655934 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Feb 9 02:51:18.655940 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 02:51:18.655945 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 02:51:18.655950 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Feb 9 02:51:18.655955 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Feb 9 02:51:18.655960 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Feb 9 02:51:18.655966 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Feb 9 02:51:18.655971 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Feb 9 02:51:18.655976 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Feb 9 02:51:18.655981 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Feb 9 02:51:18.656019 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Feb 9 02:51:18.656026 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Feb 9 02:51:18.656032 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Feb 9 02:51:18.656037 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Feb 9 02:51:18.656042 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Feb 9 02:51:18.656049 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Feb 9 02:51:18.656054 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Feb 9 02:51:18.656060 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Feb 9 02:51:18.656065 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Feb 9 02:51:18.656070 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Feb 9 02:51:18.656076 kernel: Policy zone: DMA32
Feb 9 02:51:18.656082 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 02:51:18.656088 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 02:51:18.656096 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Feb 9 02:51:18.656102 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Feb 9 02:51:18.656107 kernel: printk: log_buf_len min size: 262144 bytes
Feb 9 02:51:18.656113 kernel: printk: log_buf_len: 1048576 bytes
Feb 9 02:51:18.656118 kernel: printk: early log buf free: 239728(91%)
Feb 9 02:51:18.656124 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 02:51:18.656130 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 02:51:18.656136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 02:51:18.656141 kernel: Memory: 1942952K/2096628K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 153416K reserved, 0K cma-reserved)
Feb 9 02:51:18.656148 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Feb 9 02:51:18.656153 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 02:51:18.656159 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 02:51:18.656166 kernel: rcu: Hierarchical RCU implementation.
Feb 9 02:51:18.656172 kernel: rcu: RCU event tracing is enabled.
Feb 9 02:51:18.656177 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Feb 9 02:51:18.656184 kernel: Rude variant of Tasks RCU enabled.
Feb 9 02:51:18.656189 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 02:51:18.656195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 02:51:18.656201 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Feb 9 02:51:18.656206 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Feb 9 02:51:18.656212 kernel: random: crng init done
Feb 9 02:51:18.656217 kernel: Console: colour VGA+ 80x25
Feb 9 02:51:18.656223 kernel: printk: console [tty0] enabled
Feb 9 02:51:18.656228 kernel: printk: console [ttyS0] enabled
Feb 9 02:51:18.656235 kernel: ACPI: Core revision 20210730
Feb 9 02:51:18.656241 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Feb 9 02:51:18.656246 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 02:51:18.656252 kernel: x2apic enabled
Feb 9 02:51:18.656258 kernel: Switched APIC routing to physical x2apic.
Feb 9 02:51:18.656263 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 02:51:18.656269 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Feb 9 02:51:18.656275 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Feb 9 02:51:18.656280 kernel: Disabled fast string operations
Feb 9 02:51:18.656287 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 02:51:18.656293 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 02:51:18.656298 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 02:51:18.656304 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 02:51:18.656310 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 9 02:51:18.656315 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 02:51:18.656321 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 9 02:51:18.656327 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 9 02:51:18.656332 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 02:51:18.656339 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 02:51:18.656345 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 02:51:18.656350 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 9 02:51:18.656356 kernel: GDS: Unknown: Dependent on hypervisor status Feb 9 02:51:18.656362 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 02:51:18.656367 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 02:51:18.656373 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 02:51:18.656379 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 02:51:18.656384 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 9 02:51:18.656391 kernel: Freeing SMP alternatives memory: 32K Feb 9 02:51:18.656396 kernel: pid_max: default: 131072 minimum: 1024 Feb 9 02:51:18.656402 kernel: LSM: Security Framework initializing Feb 9 02:51:18.656408 kernel: SELinux: Initializing. Feb 9 02:51:18.656413 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 02:51:18.656419 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 02:51:18.656425 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 9 02:51:18.656430 kernel: Performance Events: Skylake events, core PMU driver. 
Feb 9 02:51:18.656437 kernel: core: CPUID marked event: 'cpu cycles' unavailable Feb 9 02:51:18.656443 kernel: core: CPUID marked event: 'instructions' unavailable Feb 9 02:51:18.656448 kernel: core: CPUID marked event: 'bus cycles' unavailable Feb 9 02:51:18.656453 kernel: core: CPUID marked event: 'cache references' unavailable Feb 9 02:51:18.656459 kernel: core: CPUID marked event: 'cache misses' unavailable Feb 9 02:51:18.663817 kernel: core: CPUID marked event: 'branch instructions' unavailable Feb 9 02:51:18.663829 kernel: core: CPUID marked event: 'branch misses' unavailable Feb 9 02:51:18.663842 kernel: ... version: 1 Feb 9 02:51:18.663851 kernel: ... bit width: 48 Feb 9 02:51:18.663858 kernel: ... generic registers: 4 Feb 9 02:51:18.663867 kernel: ... value mask: 0000ffffffffffff Feb 9 02:51:18.663876 kernel: ... max period: 000000007fffffff Feb 9 02:51:18.663881 kernel: ... fixed-purpose events: 0 Feb 9 02:51:18.663887 kernel: ... event mask: 000000000000000f Feb 9 02:51:18.663893 kernel: signal: max sigframe size: 1776 Feb 9 02:51:18.663899 kernel: rcu: Hierarchical SRCU implementation. Feb 9 02:51:18.663910 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 02:51:18.663915 kernel: smp: Bringing up secondary CPUs ... Feb 9 02:51:18.663923 kernel: x86: Booting SMP configuration: Feb 9 02:51:18.663929 kernel: .... 
node #0, CPUs: #1 Feb 9 02:51:18.663941 kernel: Disabled fast string operations Feb 9 02:51:18.663947 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Feb 9 02:51:18.663953 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 9 02:51:18.663959 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 02:51:18.663972 kernel: smpboot: Max logical packages: 128 Feb 9 02:51:18.663978 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Feb 9 02:51:18.663984 kernel: devtmpfs: initialized Feb 9 02:51:18.663989 kernel: x86/mm: Memory block size: 128MB Feb 9 02:51:18.663997 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Feb 9 02:51:18.664003 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 02:51:18.664008 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 9 02:51:18.664021 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 02:51:18.664027 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 02:51:18.664033 kernel: audit: initializing netlink subsys (disabled) Feb 9 02:51:18.664039 kernel: audit: type=2000 audit(1707447077.057:1): state=initialized audit_enabled=0 res=1 Feb 9 02:51:18.664045 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 02:51:18.664051 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 02:51:18.664063 kernel: cpuidle: using governor menu Feb 9 02:51:18.664069 kernel: Simple Boot Flag at 0x36 set to 0x80 Feb 9 02:51:18.664075 kernel: ACPI: bus type PCI registered Feb 9 02:51:18.664080 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 02:51:18.664086 kernel: dca service started, version 1.12.1 Feb 9 02:51:18.664098 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Feb 9 02:51:18.664105 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Feb 9 
02:51:18.664110 kernel: PCI: Using configuration type 1 for base access Feb 9 02:51:18.664117 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 9 02:51:18.664124 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 02:51:18.664130 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 02:51:18.664138 kernel: ACPI: Added _OSI(Module Device) Feb 9 02:51:18.664148 kernel: ACPI: Added _OSI(Processor Device) Feb 9 02:51:18.664154 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 02:51:18.664160 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 02:51:18.664165 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 02:51:18.664174 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 02:51:18.664184 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 02:51:18.664191 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 02:51:18.664197 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 9 02:51:18.664203 kernel: ACPI: Interpreter enabled Feb 9 02:51:18.664213 kernel: ACPI: PM: (supports S0 S1 S5) Feb 9 02:51:18.664219 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 02:51:18.664225 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 02:51:18.664231 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Feb 9 02:51:18.664237 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Feb 9 02:51:18.664327 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 02:51:18.664379 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Feb 9 02:51:18.664423 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Feb 9 02:51:18.664431 kernel: PCI host bridge to bus 0000:00 Feb 9 02:51:18.664479 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 02:51:18.664520 kernel: pci_bus 
0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Feb 9 02:51:18.664610 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Feb 9 02:51:18.664653 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Feb 9 02:51:18.664692 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Feb 9 02:51:18.664730 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 9 02:51:18.664768 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 02:51:18.664806 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Feb 9 02:51:18.664845 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Feb 9 02:51:18.664896 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Feb 9 02:51:18.664949 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Feb 9 02:51:18.664998 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Feb 9 02:51:18.665047 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Feb 9 02:51:18.665092 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Feb 9 02:51:18.665137 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 02:51:18.665182 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 02:51:18.665228 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 02:51:18.665271 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 02:51:18.665320 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Feb 9 02:51:18.665364 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Feb 9 02:51:18.665409 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Feb 9 02:51:18.665460 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Feb 9 02:51:18.665506 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Feb 9 02:51:18.668623 kernel: pci 0000:00:07.7: reg 0x14: [mem 
0xfebfe000-0xfebfffff 64bit] Feb 9 02:51:18.668684 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Feb 9 02:51:18.668733 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Feb 9 02:51:18.668779 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Feb 9 02:51:18.668824 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Feb 9 02:51:18.668868 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Feb 9 02:51:18.668911 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 02:51:18.668963 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Feb 9 02:51:18.669018 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669065 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669113 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669158 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669205 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669251 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669300 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669344 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669393 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669438 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669485 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669561 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669616 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669662 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669709 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669755 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Feb 9 
02:51:18.669803 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669851 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669900 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.669945 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.669993 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670039 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670086 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670134 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670182 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670226 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670275 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670320 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670370 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670417 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670467 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670511 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670573 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670620 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670669 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670717 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670763 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670808 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.670855 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670901 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold 
Feb 9 02:51:18.670949 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.670995 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671043 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671088 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671139 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671184 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671233 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671279 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671329 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671374 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671421 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671466 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671513 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671571 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671623 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671668 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671716 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671761 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671809 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671854 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.671905 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.671951 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.672005 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Feb 9 02:51:18.672051 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot 
D3cold Feb 9 02:51:18.672098 kernel: pci_bus 0000:01: extended config space not accessible Feb 9 02:51:18.672144 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 02:51:18.672193 kernel: pci_bus 0000:02: extended config space not accessible Feb 9 02:51:18.672202 kernel: acpiphp: Slot [32] registered Feb 9 02:51:18.672208 kernel: acpiphp: Slot [33] registered Feb 9 02:51:18.672214 kernel: acpiphp: Slot [34] registered Feb 9 02:51:18.672220 kernel: acpiphp: Slot [35] registered Feb 9 02:51:18.672226 kernel: acpiphp: Slot [36] registered Feb 9 02:51:18.672231 kernel: acpiphp: Slot [37] registered Feb 9 02:51:18.672237 kernel: acpiphp: Slot [38] registered Feb 9 02:51:18.672242 kernel: acpiphp: Slot [39] registered Feb 9 02:51:18.672249 kernel: acpiphp: Slot [40] registered Feb 9 02:51:18.672255 kernel: acpiphp: Slot [41] registered Feb 9 02:51:18.672261 kernel: acpiphp: Slot [42] registered Feb 9 02:51:18.672266 kernel: acpiphp: Slot [43] registered Feb 9 02:51:18.672272 kernel: acpiphp: Slot [44] registered Feb 9 02:51:18.672278 kernel: acpiphp: Slot [45] registered Feb 9 02:51:18.672283 kernel: acpiphp: Slot [46] registered Feb 9 02:51:18.672289 kernel: acpiphp: Slot [47] registered Feb 9 02:51:18.672295 kernel: acpiphp: Slot [48] registered Feb 9 02:51:18.672300 kernel: acpiphp: Slot [49] registered Feb 9 02:51:18.672307 kernel: acpiphp: Slot [50] registered Feb 9 02:51:18.672313 kernel: acpiphp: Slot [51] registered Feb 9 02:51:18.672318 kernel: acpiphp: Slot [52] registered Feb 9 02:51:18.672324 kernel: acpiphp: Slot [53] registered Feb 9 02:51:18.672329 kernel: acpiphp: Slot [54] registered Feb 9 02:51:18.672335 kernel: acpiphp: Slot [55] registered Feb 9 02:51:18.672340 kernel: acpiphp: Slot [56] registered Feb 9 02:51:18.672346 kernel: acpiphp: Slot [57] registered Feb 9 02:51:18.672352 kernel: acpiphp: Slot [58] registered Feb 9 02:51:18.672358 kernel: acpiphp: Slot [59] registered Feb 9 02:51:18.672364 kernel: acpiphp: Slot [60] registered Feb 9 
02:51:18.672370 kernel: acpiphp: Slot [61] registered Feb 9 02:51:18.672375 kernel: acpiphp: Slot [62] registered Feb 9 02:51:18.672381 kernel: acpiphp: Slot [63] registered Feb 9 02:51:18.672425 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Feb 9 02:51:18.672470 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 9 02:51:18.672514 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 9 02:51:18.672565 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 02:51:18.672612 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Feb 9 02:51:18.672657 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Feb 9 02:51:18.672700 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Feb 9 02:51:18.672745 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Feb 9 02:51:18.672789 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Feb 9 02:51:18.672834 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Feb 9 02:51:18.672878 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Feb 9 02:51:18.672925 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Feb 9 02:51:18.672975 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Feb 9 02:51:18.673023 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Feb 9 02:51:18.680613 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Feb 9 02:51:18.680673 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 9 02:51:18.680721 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 9 02:51:18.680767 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 9 02:51:18.680819 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 9 02:51:18.680864 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 9 02:51:18.680908 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 9 02:51:18.680954 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 9 02:51:18.680998 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 9 02:51:18.681042 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 9 02:51:18.681086 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 02:51:18.681130 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 9 02:51:18.681177 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 9 02:51:18.681220 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 9 02:51:18.681264 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 02:51:18.681308 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 9 02:51:18.681351 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 9 02:51:18.681395 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 02:51:18.681459 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 9 02:51:18.681503 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 9 02:51:18.688557 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 02:51:18.688634 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 9 02:51:18.688682 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 9 02:51:18.688728 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 02:51:18.688778 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 9 02:51:18.688824 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 9 02:51:18.688867 kernel: pci 0000:00:15.6: bridge window [mem 
0xe6400000-0xe64fffff 64bit pref] Feb 9 02:51:18.688913 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 9 02:51:18.688956 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 9 02:51:18.689001 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 02:51:18.689052 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Feb 9 02:51:18.689099 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Feb 9 02:51:18.689147 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Feb 9 02:51:18.689210 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Feb 9 02:51:18.689264 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Feb 9 02:51:18.689310 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 9 02:51:18.689356 kernel: pci 0000:0b:00.0: supports D1 D2 Feb 9 02:51:18.689401 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 02:51:18.689447 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 9 02:51:18.689495 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 9 02:51:18.689574 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 9 02:51:18.689621 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 9 02:51:18.689668 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 9 02:51:18.689713 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 9 02:51:18.689758 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 9 02:51:18.689803 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 02:51:18.689848 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 9 02:51:18.689896 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 9 02:51:18.689940 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 9 02:51:18.689984 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 02:51:18.690030 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 9 02:51:18.690075 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 9 02:51:18.690120 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 02:51:18.690165 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 9 02:51:18.690209 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 9 02:51:18.690255 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 02:51:18.690302 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 9 02:51:18.690346 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 9 02:51:18.690390 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 02:51:18.690436 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 9 02:51:18.690481 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 9 02:51:18.690524 kernel: pci 0000:00:16.6: bridge window [mem 
0xe6300000-0xe63fffff 64bit pref] Feb 9 02:51:18.690579 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 9 02:51:18.690626 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 9 02:51:18.690672 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 02:51:18.690718 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 9 02:51:18.690763 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 9 02:51:18.690807 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 9 02:51:18.690850 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 02:51:18.690895 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 9 02:51:18.690941 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 9 02:51:18.690985 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 9 02:51:18.691034 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 02:51:18.691080 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 9 02:51:18.691124 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 9 02:51:18.691168 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 9 02:51:18.691212 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 02:51:18.691257 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 9 02:51:18.691303 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 9 02:51:18.691347 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 9 02:51:18.691392 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 9 02:51:18.691436 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 9 02:51:18.691481 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 02:51:18.691525 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 9 02:51:18.691583 kernel: pci 0000:00:17.5: bridge window [mem 
0xfbf00000-0xfbffffff] Feb 9 02:51:18.691629 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 02:51:18.691676 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 9 02:51:18.691721 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 9 02:51:18.691766 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 02:51:18.691811 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 9 02:51:18.691855 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 9 02:51:18.691900 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 02:51:18.691948 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 9 02:51:18.691992 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 9 02:51:18.692038 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 9 02:51:18.692082 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 02:51:18.692128 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 9 02:51:18.692172 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 9 02:51:18.692216 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 9 02:51:18.692259 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 02:51:18.692305 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 9 02:51:18.692351 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 9 02:51:18.692395 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 02:51:18.692440 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 9 02:51:18.692484 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 9 02:51:18.692535 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 02:51:18.692588 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 9 02:51:18.692632 kernel: pci 0000:00:18.4: bridge window [mem 
0xfc200000-0xfc2fffff]
Feb 9 02:51:18.692676 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Feb 9 02:51:18.692725 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Feb 9 02:51:18.692769 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Feb 9 02:51:18.692814 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Feb 9 02:51:18.692920 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Feb 9 02:51:18.692972 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Feb 9 02:51:18.693017 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Feb 9 02:51:18.693063 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Feb 9 02:51:18.693108 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Feb 9 02:51:18.693151 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Feb 9 02:51:18.693161 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Feb 9 02:51:18.693167 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Feb 9 02:51:18.693173 kernel: ACPI: PCI: Interrupt link LNKB disabled
Feb 9 02:51:18.693179 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 02:51:18.693185 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Feb 9 02:51:18.693191 kernel: iommu: Default domain type: Translated
Feb 9 02:51:18.693196 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 02:51:18.693241 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Feb 9 02:51:18.693287 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 02:51:18.693331 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Feb 9 02:51:18.693340 kernel: vgaarb: loaded
Feb 9 02:51:18.693345 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 02:51:18.693352 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 02:51:18.693357 kernel: PTP clock support registered
Feb 9 02:51:18.693363 kernel: PCI: Using ACPI for IRQ routing
Feb 9 02:51:18.693369 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 02:51:18.693375 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Feb 9 02:51:18.693382 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Feb 9 02:51:18.693387 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Feb 9 02:51:18.693393 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Feb 9 02:51:18.693399 kernel: clocksource: Switched to clocksource tsc-early
Feb 9 02:51:18.693404 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 02:51:18.693410 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 02:51:18.693416 kernel: pnp: PnP ACPI init
Feb 9 02:51:18.693464 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Feb 9 02:51:18.693506 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Feb 9 02:51:18.693556 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Feb 9 02:51:18.693602 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Feb 9 02:51:18.693645 kernel: pnp 00:06: [dma 2]
Feb 9 02:51:18.693689 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Feb 9 02:51:18.693729 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Feb 9 02:51:18.693769 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Feb 9 02:51:18.693779 kernel: pnp: PnP ACPI: found 8 devices
Feb 9 02:51:18.693785 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 02:51:18.693791 kernel: NET: Registered PF_INET protocol family
Feb 9 02:51:18.693797 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 02:51:18.693803 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 9 02:51:18.693809 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 02:51:18.693815 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 02:51:18.693821 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 02:51:18.693827 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 9 02:51:18.693833 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 02:51:18.693839 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 02:51:18.693845 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 02:51:18.693851 kernel: NET: Registered PF_XDP protocol family
Feb 9 02:51:18.693898 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Feb 9 02:51:18.693944 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 9 02:51:18.693994 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 9 02:51:18.694044 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 9 02:51:18.694089 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 9 02:51:18.694134 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Feb 9 02:51:18.694178 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Feb 9 02:51:18.694225 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Feb 9 02:51:18.694269 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Feb 9 02:51:18.694315 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Feb 9 02:51:18.694361 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Feb 9 02:51:18.694406 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Feb 9 02:51:18.694451 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Feb 9 02:51:18.694495 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Feb 9 02:51:18.694554 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Feb 9 02:51:18.694603 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Feb 9 02:51:18.694649 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Feb 9 02:51:18.694694 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Feb 9 02:51:18.694739 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Feb 9 02:51:18.694794 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Feb 9 02:51:18.694840 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Feb 9 02:51:18.694885 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Feb 9 02:51:18.694931 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Feb 9 02:51:18.694975 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Feb 9 02:51:18.695020 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Feb 9 02:51:18.695064 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695109 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695155 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695199 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695242 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695286 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695329 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695374 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695417 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695461 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695507 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695627 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695672 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695716 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695761 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695804 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695848 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695892 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.695938 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.695982 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696062 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696110 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696155 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696198 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696242 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696286 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696330 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696376 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696420 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696463 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696508 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696679 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696726 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696770 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696815 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696861 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696906 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.696949 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.696994 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697038 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697081 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697125 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697168 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697214 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697266 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697323 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697368 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697412 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697455 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697498 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697550 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697594 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697641 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697685 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697729 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697773 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697816 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697860 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697903 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.697947 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.697991 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698035 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698080 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698125 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698168 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698213 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698256 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698300 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698343 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698387 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698431 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698477 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698521 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698573 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698617 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698660 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698704 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698747 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698792 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698835 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698879 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.698925 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.698969 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Feb 9 02:51:18.699017 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Feb 9 02:51:18.699062 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 02:51:18.699108 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Feb 9 02:51:18.699153 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Feb 9 02:51:18.699197 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Feb 9 02:51:18.699240 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Feb 9 02:51:18.699291 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Feb 9 02:51:18.699337 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Feb 9 02:51:18.699380 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Feb 9 02:51:18.699425 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Feb 9 02:51:18.699468 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Feb 9 02:51:18.699515 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Feb 9 02:51:18.699571 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Feb 9 02:51:18.699615 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Feb 9 02:51:18.699660 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Feb 9 02:51:18.699708 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Feb 9 02:51:18.699753 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Feb 9 02:51:18.699796 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Feb 9 02:51:18.699841 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Feb 9 02:51:18.699884 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Feb 9 02:51:18.699929 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Feb 9 02:51:18.699973 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Feb 9 02:51:18.700019 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Feb 9 02:51:18.700064 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Feb 9 02:51:18.700108 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Feb 9 02:51:18.700152 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Feb 9 02:51:18.700196 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Feb 9 02:51:18.700240 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Feb 9 02:51:18.700284 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Feb 9 02:51:18.700328 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Feb 9 02:51:18.700374 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Feb 9 02:51:18.700418 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Feb 9 02:51:18.700462 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Feb 9 02:51:18.700505 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Feb 9 02:51:18.700566 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Feb 9 02:51:18.700612 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Feb 9 02:51:18.700656 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Feb 9 02:51:18.700700 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Feb 9 02:51:18.700744 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Feb 9 02:51:18.700792 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Feb 9 02:51:18.700837 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Feb 9 02:51:18.700883 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Feb 9 02:51:18.700927 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Feb 9 02:51:18.700972 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Feb 9 02:51:18.701016 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Feb 9 02:51:18.701061 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Feb 9 02:51:18.701105 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Feb 9 02:51:18.701150 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Feb 9 02:51:18.701193 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Feb 9 02:51:18.701240 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Feb 9 02:51:18.701285 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Feb 9 02:51:18.701330 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Feb 9 02:51:18.701374 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Feb 9 02:51:18.701419 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Feb 9 02:51:18.701463 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Feb 9 02:51:18.701507 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Feb 9 02:51:18.701564 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Feb 9 02:51:18.701610 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Feb 9 02:51:18.701657 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Feb 9 02:51:18.701702 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Feb 9 02:51:18.701746 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Feb 9 02:51:18.701790 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Feb 9 02:51:18.701835 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Feb 9 02:51:18.701878 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Feb 9 02:51:18.701922 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Feb 9 02:51:18.702261 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Feb 9 02:51:18.702311 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Feb 9 02:51:18.702357 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Feb 9 02:51:18.702405 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Feb 9 02:51:18.702451 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Feb 9 02:51:18.702496 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Feb 9 02:51:18.702550 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Feb 9 02:51:18.702596 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Feb 9 02:51:18.702640 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Feb 9 02:51:18.702684 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Feb 9 02:51:18.702728 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Feb 9 02:51:18.702772 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Feb 9 02:51:18.702818 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Feb 9 02:51:18.702861 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Feb 9 02:51:18.702905 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Feb 9 02:51:18.702948 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Feb 9 02:51:18.702992 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Feb 9 02:51:18.703035 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Feb 9 02:51:18.703080 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Feb 9 02:51:18.703123 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Feb 9 02:51:18.703168 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Feb 9 02:51:18.703211 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Feb 9 02:51:18.703258 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Feb 9 02:51:18.703302 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Feb 9 02:51:18.703585 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Feb 9 02:51:18.703637 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Feb 9 02:51:18.703684 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Feb 9 02:51:18.703729 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Feb 9 02:51:18.703800 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Feb 9 02:51:18.704064 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Feb 9 02:51:18.704114 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Feb 9 02:51:18.704163 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Feb 9 02:51:18.704209 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Feb 9 02:51:18.704255 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Feb 9 02:51:18.704299 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Feb 9 02:51:18.704345 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Feb 9 02:51:18.704389 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Feb 9 02:51:18.704433 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Feb 9 02:51:18.704478 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Feb 9 02:51:18.704522 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Feb 9 02:51:18.704583 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Feb 9 02:51:18.704632 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Feb 9 02:51:18.704677 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Feb 9 02:51:18.704722 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Feb 9 02:51:18.704787 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Feb 9 02:51:18.705060 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Feb 9 02:51:18.705111 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Feb 9 02:51:18.705362 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Feb 9 02:51:18.705412 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Feb 9 02:51:18.705460 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Feb 9 02:51:18.705509 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Feb 9 02:51:18.705648 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window]
Feb 9 02:51:18.705688 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window]
Feb 9 02:51:18.705727 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window]
Feb 9 02:51:18.705765 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window]
Feb 9 02:51:18.705803 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window]
Feb 9 02:51:18.705841 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window]
Feb 9 02:51:18.705882 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window]
Feb 9 02:51:18.705924 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Feb 9 02:51:18.705965 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Feb 9 02:51:18.706011 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Feb 9 02:51:18.706051 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Feb 9 02:51:18.706090 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window]
Feb 9 02:51:18.706130 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window]
Feb 9 02:51:18.706172 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window]
Feb 9 02:51:18.706211 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window]
Feb 9 02:51:18.706251 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window]
Feb 9 02:51:18.706290 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window]
Feb 9 02:51:18.706330 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window]
Feb 9 02:51:18.706374 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Feb 9 02:51:18.706414 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Feb 9 02:51:18.706454 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Feb 9 02:51:18.706499 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Feb 9 02:51:18.706552 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Feb 9 02:51:18.706595 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Feb 9 02:51:18.706639 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Feb 9 02:51:18.706678 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Feb 9 02:51:18.706718 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Feb 9 02:51:18.706764 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Feb 9 02:51:18.706807 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Feb 9 02:51:18.706851 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Feb 9 02:51:18.706892 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Feb 9 02:51:18.706938 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Feb 9 02:51:18.707288 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Feb 9 02:51:18.707343 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Feb 9 02:51:18.707387 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Feb 9 02:51:18.707432 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Feb 9 02:51:18.707473 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Feb 9 02:51:18.707518 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Feb 9 02:51:18.707605 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Feb 9 02:51:18.707654 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Feb 9 02:51:18.707706 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Feb 9 02:51:18.707748 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Feb 9 02:51:18.707801 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Feb 9 02:51:18.707849 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Feb 9 02:51:18.707999 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Feb 9 02:51:18.708061 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Feb 9 02:51:18.708111 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Feb 9 02:51:18.708153 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Feb 9 02:51:18.708208 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Feb 9 02:51:18.708251 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Feb 9 02:51:18.708295 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Feb 9 02:51:18.708701 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Feb 9 02:51:18.708753 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Feb 9 02:51:18.708798 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Feb 9 02:51:18.708846 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Feb 9 02:51:18.709025 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Feb 9 02:51:18.709075 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Feb 9 02:51:18.709118 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Feb 9 02:51:18.709158 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Feb 9 02:51:18.709205 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Feb 9 02:51:18.709246 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Feb 9 02:51:18.709286 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Feb 9 02:51:18.709331 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Feb 9 02:51:18.709372 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Feb 9 02:51:18.709413 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Feb 9 02:51:18.709721 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Feb 9 02:51:18.709770 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Feb 9 02:51:18.710098 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Feb 9 02:51:18.710144 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Feb 9 02:51:18.710191 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Feb 9 02:51:18.710233 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Feb 9 02:51:18.710279 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Feb 9 02:51:18.710324 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Feb 9 02:51:18.710371 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Feb 9 02:51:18.710412 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Feb 9 02:51:18.710456 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Feb 9 02:51:18.710497 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Feb 9 02:51:18.710550 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Feb 9 02:51:18.710599 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Feb 9 02:51:18.710641 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Feb 9 02:51:18.710682 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Feb 9 02:51:18.710726 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Feb 9 02:51:18.710874 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Feb 9 02:51:18.710924 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Feb 9 02:51:18.710969 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Feb 9 02:51:18.711296 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Feb 9 02:51:18.711344 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Feb 9 02:51:18.711390 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Feb 9 02:51:18.711432 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Feb 9 02:51:18.711477 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Feb 9 02:51:18.711522 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Feb 9 02:51:18.711605 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Feb 9 02:51:18.711647 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Feb 9 02:51:18.711696 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 02:51:18.711705 kernel: PCI: CLS 32 bytes, default 64
Feb 9 02:51:18.711712 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 02:51:18.711719 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Feb 9 02:51:18.711725 kernel: clocksource: Switched to clocksource tsc
Feb 9 02:51:18.711733 kernel: Initialise system trusted keyrings
Feb 9 02:51:18.711740 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 9 02:51:18.711746 kernel: Key type asymmetric registered
Feb 9 02:51:18.711752 kernel: Asymmetric key parser 'x509' registered
Feb 9 02:51:18.711758 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 02:51:18.711764 kernel: io scheduler mq-deadline registered
Feb 9 02:51:18.711904 kernel: io scheduler kyber registered
Feb 9 02:51:18.711913 kernel: io scheduler bfq registered
Feb 9 02:51:18.712166 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Feb 9 02:51:18.712224 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.712272 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Feb 9 02:51:18.712607 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.712659 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Feb 9 02:51:18.712707 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.712754 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Feb 9 02:51:18.712801 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.712849 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Feb 9 02:51:18.712895 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.712941 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Feb 9 02:51:18.712986 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713031 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Feb 9 02:51:18.713079 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713125 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Feb 9 02:51:18.713172 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713218 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Feb 9 02:51:18.713263 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713309 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Feb 9 02:51:18.713356 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713400 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Feb 9 02:51:18.713445 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713490 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Feb 9 02:51:18.713548 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713597 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Feb 9 02:51:18.713644 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713690 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Feb 9 02:51:18.713735 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713888 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Feb 9 02:51:18.713937 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.713987 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Feb 9 02:51:18.714317 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.714366 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Feb 9 02:51:18.714412 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.714459 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Feb 9 02:51:18.714881 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.714935 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Feb 9 02:51:18.714986 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.715037 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Feb 9 02:51:18.715317 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.715374 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Feb 9 02:51:18.715674 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.715726 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Feb 9 02:51:18.715776 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 9 02:51:18.715827 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Feb 9 02:51:18.715876 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock-
NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.715921 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Feb 9 02:51:18.715966 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716023 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Feb 9 02:51:18.716069 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716114 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Feb 9 02:51:18.716158 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716202 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Feb 9 02:51:18.716247 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716293 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Feb 9 02:51:18.716338 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716383 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Feb 9 02:51:18.716428 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716471 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Feb 9 02:51:18.716518 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716803 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Feb 9 02:51:18.716853 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716899 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Feb 9 02:51:18.716944 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 02:51:18.716953 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 02:51:18.716962 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 02:51:18.716968 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 02:51:18.716974 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Feb 9 02:51:18.716981 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 02:51:18.716987 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 02:51:18.717033 kernel: rtc_cmos 00:01: registered as rtc0 Feb 9 02:51:18.717074 kernel: rtc_cmos 00:01: setting system clock to 2024-02-09T02:51:18 UTC (1707447078) Feb 9 02:51:18.717115 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Feb 9 02:51:18.717182 kernel: fail to initialize ptp_kvm Feb 9 02:51:18.717193 kernel: intel_pstate: CPU model not supported Feb 9 02:51:18.717199 kernel: NET: Registered PF_INET6 protocol family Feb 9 02:51:18.717205 kernel: Segment Routing with IPv6 Feb 9 02:51:18.717211 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 02:51:18.717217 kernel: NET: Registered PF_PACKET protocol family Feb 9 02:51:18.717223 kernel: Key type dns_resolver registered Feb 9 02:51:18.717230 kernel: IPI shorthand broadcast: enabled Feb 9 02:51:18.717236 kernel: sched_clock: Marking stable (841361736, 220987329)->(1129441637, -67092572) Feb 9 02:51:18.717244 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 02:51:18.717250 kernel: registered taskstats version 1 Feb 9 02:51:18.717256 kernel: Loading compiled-in X.509 certificates Feb 9 02:51:18.717262 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 02:51:18.717268 kernel: Key type .fscrypt registered Feb 9 02:51:18.717274 kernel: Key type fscrypt-provisioning registered Feb 9 02:51:18.717280 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 02:51:18.717286 kernel: ima: Allocated hash algorithm: sha1 Feb 9 02:51:18.717292 kernel: ima: No architecture policies found Feb 9 02:51:18.717299 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 02:51:18.717305 kernel: Write protecting the kernel read-only data: 28672k Feb 9 02:51:18.717311 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 02:51:18.717318 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 02:51:18.717324 kernel: Run /init as init process Feb 9 02:51:18.717330 kernel: with arguments: Feb 9 02:51:18.717336 kernel: /init Feb 9 02:51:18.717342 kernel: with environment: Feb 9 02:51:18.717348 kernel: HOME=/ Feb 9 02:51:18.717354 kernel: TERM=linux Feb 9 02:51:18.717360 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 02:51:18.717368 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 02:51:18.717376 systemd[1]: Detected virtualization vmware. Feb 9 02:51:18.717382 systemd[1]: Detected architecture x86-64. Feb 9 02:51:18.717388 systemd[1]: Running in initrd. Feb 9 02:51:18.717395 systemd[1]: No hostname configured, using default hostname. Feb 9 02:51:18.717401 systemd[1]: Hostname set to . Feb 9 02:51:18.717409 systemd[1]: Initializing machine ID from random generator. Feb 9 02:51:18.717415 systemd[1]: Queued start job for default target initrd.target. 
Feb 9 02:51:18.717421 systemd[1]: Started systemd-ask-password-console.path. Feb 9 02:51:18.717427 systemd[1]: Reached target cryptsetup.target. Feb 9 02:51:18.717433 systemd[1]: Reached target paths.target. Feb 9 02:51:18.717439 systemd[1]: Reached target slices.target. Feb 9 02:51:18.717445 systemd[1]: Reached target swap.target. Feb 9 02:51:18.717451 systemd[1]: Reached target timers.target. Feb 9 02:51:18.717458 systemd[1]: Listening on iscsid.socket. Feb 9 02:51:18.717464 systemd[1]: Listening on iscsiuio.socket. Feb 9 02:51:18.717471 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 02:51:18.717477 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 02:51:18.717483 systemd[1]: Listening on systemd-journald.socket. Feb 9 02:51:18.717489 systemd[1]: Listening on systemd-networkd.socket. Feb 9 02:51:18.717495 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 02:51:18.717502 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 02:51:18.717508 systemd[1]: Reached target sockets.target. Feb 9 02:51:18.717515 systemd[1]: Starting kmod-static-nodes.service... Feb 9 02:51:18.717521 systemd[1]: Finished network-cleanup.service. Feb 9 02:51:18.717547 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 02:51:18.717555 systemd[1]: Starting systemd-journald.service... Feb 9 02:51:18.717562 systemd[1]: Starting systemd-modules-load.service... Feb 9 02:51:18.717568 systemd[1]: Starting systemd-resolved.service... Feb 9 02:51:18.717574 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 02:51:18.717580 systemd[1]: Finished kmod-static-nodes.service. Feb 9 02:51:18.717589 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 02:51:18.717595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 02:51:18.717601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 02:51:18.717608 kernel: audit: type=1130 audit(1707447078.655:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.717614 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 02:51:18.717620 kernel: audit: type=1130 audit(1707447078.658:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.717627 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 02:51:18.717633 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 02:51:18.717640 kernel: audit: type=1130 audit(1707447078.672:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.717646 systemd[1]: Starting dracut-cmdline.service... Feb 9 02:51:18.717652 systemd[1]: Started systemd-resolved.service. Feb 9 02:51:18.717658 kernel: audit: type=1130 audit(1707447078.683:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.717664 systemd[1]: Reached target nss-lookup.target. Feb 9 02:51:18.717672 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 02:51:18.717678 kernel: Bridge firewalling registered Feb 9 02:51:18.717684 kernel: SCSI subsystem initialized Feb 9 02:51:18.717693 systemd-journald[216]: Journal started Feb 9 02:51:18.717726 systemd-journald[216]: Runtime Journal (/run/log/journal/f7dc53e0ba324733a3960f9009c8158a) is 4.8M, max 38.8M, 34.0M free. Feb 9 02:51:18.720953 systemd[1]: Started systemd-journald.service. 
Feb 9 02:51:18.720969 kernel: audit: type=1130 audit(1707447078.717:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.648232 systemd-modules-load[217]: Inserted module 'overlay' Feb 9 02:51:18.675385 systemd-resolved[218]: Positive Trust Anchors: Feb 9 02:51:18.675390 systemd-resolved[218]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 02:51:18.675410 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 02:51:18.726516 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 02:51:18.726536 kernel: device-mapper: uevent: version 1.0.3 Feb 9 02:51:18.726548 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 02:51:18.683676 systemd-resolved[218]: Defaulting to hostname 'linux'. Feb 9 02:51:18.696627 systemd-modules-load[217]: Inserted module 'br_netfilter' Feb 9 02:51:18.727177 dracut-cmdline[233]: dracut-dracut-053 Feb 9 02:51:18.727177 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 02:51:18.727177 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 02:51:18.731730 systemd-modules-load[217]: Inserted module 'dm_multipath' Feb 9 02:51:18.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 02:51:18.732069 systemd[1]: Finished systemd-modules-load.service. Feb 9 02:51:18.736231 kernel: audit: type=1130 audit(1707447078.730:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.736244 kernel: Loading iSCSI transport class v2.0-870. Feb 9 02:51:18.732565 systemd[1]: Starting systemd-sysctl.service... Feb 9 02:51:18.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.741319 systemd[1]: Finished systemd-sysctl.service. Feb 9 02:51:18.744964 kernel: audit: type=1130 audit(1707447078.739:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.744979 kernel: iscsi: registered transport (tcp) Feb 9 02:51:18.759545 kernel: iscsi: registered transport (qla4xxx) Feb 9 02:51:18.759585 kernel: QLogic iSCSI HBA Driver Feb 9 02:51:18.775220 systemd[1]: Finished dracut-cmdline.service. Feb 9 02:51:18.775873 systemd[1]: Starting dracut-pre-udev.service... Feb 9 02:51:18.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:18.779542 kernel: audit: type=1130 audit(1707447078.773:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 02:51:18.813559 kernel: raid6: avx2x4 gen() 46642 MB/s Feb 9 02:51:18.829542 kernel: raid6: avx2x4 xor() 17464 MB/s Feb 9 02:51:18.846549 kernel: raid6: avx2x2 gen() 50515 MB/s Feb 9 02:51:18.863548 kernel: raid6: avx2x2 xor() 31809 MB/s Feb 9 02:51:18.880545 kernel: raid6: avx2x1 gen() 44600 MB/s Feb 9 02:51:18.897545 kernel: raid6: avx2x1 xor() 27699 MB/s Feb 9 02:51:18.914569 kernel: raid6: sse2x4 gen() 20491 MB/s Feb 9 02:51:18.931552 kernel: raid6: sse2x4 xor() 8791 MB/s Feb 9 02:51:18.948545 kernel: raid6: sse2x2 gen() 20118 MB/s Feb 9 02:51:18.965546 kernel: raid6: sse2x2 xor() 13351 MB/s Feb 9 02:51:18.982545 kernel: raid6: sse2x1 gen() 18146 MB/s Feb 9 02:51:18.999756 kernel: raid6: sse2x1 xor() 8857 MB/s Feb 9 02:51:18.999794 kernel: raid6: using algorithm avx2x2 gen() 50515 MB/s Feb 9 02:51:18.999801 kernel: raid6: .... xor() 31809 MB/s, rmw enabled Feb 9 02:51:19.000944 kernel: raid6: using avx2x2 recovery algorithm Feb 9 02:51:19.009551 kernel: xor: automatically using best checksumming function avx Feb 9 02:51:19.069553 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 02:51:19.075468 systemd[1]: Finished dracut-pre-udev.service. Feb 9 02:51:19.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:19.076114 systemd[1]: Starting systemd-udevd.service... Feb 9 02:51:19.074000 audit: BPF prog-id=7 op=LOAD Feb 9 02:51:19.074000 audit: BPF prog-id=8 op=LOAD Feb 9 02:51:19.079550 kernel: audit: type=1130 audit(1707447079.074:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:19.086101 systemd-udevd[416]: Using default interface naming scheme 'v252'. Feb 9 02:51:19.088724 systemd[1]: Started systemd-udevd.service. 
Feb 9 02:51:19.089290 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 02:51:19.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:19.097262 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Feb 9 02:51:19.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:19.113176 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 02:51:19.113703 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 02:51:19.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:19.176336 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 02:51:19.234541 kernel: VMware PVSCSI driver - version 1.0.7.0-k Feb 9 02:51:19.236538 kernel: vmw_pvscsi: using 64bit dma Feb 9 02:51:19.243560 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Feb 9 02:51:19.243598 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Feb 9 02:51:19.245038 kernel: vmw_pvscsi: max_id: 16 Feb 9 02:51:19.245057 kernel: vmw_pvscsi: setting ring_pages to 8 Feb 9 02:51:19.248012 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Feb 9 02:51:19.252116 kernel: vmw_pvscsi: enabling reqCallThreshold Feb 9 02:51:19.252140 kernel: vmw_pvscsi: driver-based request coalescing enabled Feb 9 02:51:19.252152 kernel: vmw_pvscsi: using MSI-X Feb 9 02:51:19.258541 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Feb 9 02:51:19.259538 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Feb 9 02:51:19.262487 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 02:51:19.262520 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Feb 9 02:51:19.274562 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Feb 9 02:51:19.274665 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 02:51:19.274674 kernel: libata version 3.00 loaded. 
Feb 9 02:51:19.277766 kernel: ata_piix 0000:00:07.1: version 2.13 Feb 9 02:51:19.277850 kernel: AES CTR mode by8 optimization enabled Feb 9 02:51:19.279545 kernel: scsi host1: ata_piix Feb 9 02:51:19.283538 kernel: scsi host2: ata_piix Feb 9 02:51:19.283617 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Feb 9 02:51:19.283627 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Feb 9 02:51:19.451550 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Feb 9 02:51:19.458198 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Feb 9 02:51:19.464570 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Feb 9 02:51:19.464662 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 02:51:19.464723 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Feb 9 02:51:19.464778 kernel: sd 0:0:0:0: [sda] Cache data unavailable Feb 9 02:51:19.464832 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Feb 9 02:51:19.469547 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 02:51:19.470538 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 02:51:19.493544 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Feb 9 02:51:19.493664 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 02:51:19.496540 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (476) Feb 9 02:51:19.499768 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 02:51:19.500080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 02:51:19.500718 systemd[1]: Starting disk-uuid.service... Feb 9 02:51:19.504701 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 02:51:19.516397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 02:51:19.519215 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Feb 9 02:51:19.520563 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 9 02:51:20.591981 disk-uuid[548]: The operation has completed successfully. Feb 9 02:51:20.592539 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 02:51:20.963048 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 02:51:20.963400 systemd[1]: Finished disk-uuid.service. Feb 9 02:51:20.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:20.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:20.964323 systemd[1]: Starting verity-setup.service... Feb 9 02:51:21.004578 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 02:51:21.638268 systemd[1]: Found device dev-mapper-usr.device. Feb 9 02:51:21.639166 systemd[1]: Mounting sysusr-usr.mount... Feb 9 02:51:21.639793 systemd[1]: Finished verity-setup.service. Feb 9 02:51:21.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:21.697237 systemd[1]: Mounted sysusr-usr.mount. Feb 9 02:51:21.697541 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 02:51:21.698068 systemd[1]: Starting afterburn-network-kargs.service... Feb 9 02:51:21.698848 systemd[1]: Starting ignition-setup.service... 
Feb 9 02:51:21.724547 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 02:51:21.724585 kernel: BTRFS info (device sda6): using free space tree Feb 9 02:51:21.724594 kernel: BTRFS info (device sda6): has skinny extents Feb 9 02:51:21.731553 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 02:51:21.739185 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 02:51:21.744408 systemd[1]: Finished ignition-setup.service. Feb 9 02:51:21.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:21.745023 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 02:51:21.930967 systemd[1]: Finished afterburn-network-kargs.service. Feb 9 02:51:21.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:21.931729 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 02:51:21.978425 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 02:51:21.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:21.977000 audit: BPF prog-id=9 op=LOAD Feb 9 02:51:21.979434 systemd[1]: Starting systemd-networkd.service... Feb 9 02:51:21.998707 systemd-networkd[734]: lo: Link UP Feb 9 02:51:21.998972 systemd-networkd[734]: lo: Gained carrier Feb 9 02:51:21.999585 systemd-networkd[734]: Enumeration completed Feb 9 02:51:22.000056 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Feb 9 02:51:22.000126 systemd[1]: Started systemd-networkd.service. 
Feb 9 02:51:22.000490 systemd[1]: Reached target network.target. Feb 9 02:51:21.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.001176 systemd[1]: Starting iscsiuio.service... Feb 9 02:51:22.005065 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 9 02:51:22.005200 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 9 02:51:22.005469 systemd[1]: Started iscsiuio.service. Feb 9 02:51:22.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.005999 systemd-networkd[734]: ens192: Link UP Feb 9 02:51:22.006130 systemd-networkd[734]: ens192: Gained carrier Feb 9 02:51:22.006623 systemd[1]: Starting iscsid.service... Feb 9 02:51:22.009719 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 02:51:22.009719 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 02:51:22.009719 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 02:51:22.009719 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 9 02:51:22.009719 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 02:51:22.009719 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 02:51:22.009964 systemd[1]: Started iscsid.service. Feb 9 02:51:22.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.011326 systemd[1]: Starting dracut-initqueue.service... Feb 9 02:51:22.019001 systemd[1]: Finished dracut-initqueue.service. Feb 9 02:51:22.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.019392 systemd[1]: Reached target remote-fs-pre.target. Feb 9 02:51:22.019497 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 02:51:22.019667 systemd[1]: Reached target remote-fs.target. Feb 9 02:51:22.020677 systemd[1]: Starting dracut-pre-mount.service... Feb 9 02:51:22.025661 systemd[1]: Finished dracut-pre-mount.service. Feb 9 02:51:22.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 02:51:22.064543 ignition[606]: Ignition 2.14.0 Feb 9 02:51:22.064552 ignition[606]: Stage: fetch-offline Feb 9 02:51:22.064597 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 02:51:22.064612 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 02:51:22.067641 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 02:51:22.067734 ignition[606]: parsed url from cmdline: "" Feb 9 02:51:22.067736 ignition[606]: no config URL provided Feb 9 02:51:22.067739 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 02:51:22.067744 ignition[606]: no config at "/usr/lib/ignition/user.ign" Feb 9 02:51:22.068385 ignition[606]: config successfully fetched Feb 9 02:51:22.068418 ignition[606]: parsing config with SHA512: 132be482aef5d2cf657e32ef7366ec7933bc022215133f0ec07b615cdc37d26c0f300d2022e35c2feedabe963731c045d0a1ab679cb46a3ded7a8e2ae38df22a Feb 9 02:51:22.098702 unknown[606]: fetched base config from "system" Feb 9 02:51:22.098901 unknown[606]: fetched user config from "vmware" Feb 9 02:51:22.099492 ignition[606]: fetch-offline: fetch-offline passed Feb 9 02:51:22.099667 ignition[606]: Ignition finished successfully Feb 9 02:51:22.100339 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 02:51:22.100503 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 02:51:22.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.100957 systemd[1]: Starting ignition-kargs.service... 
Feb 9 02:51:22.105874 ignition[754]: Ignition 2.14.0 Feb 9 02:51:22.106102 ignition[754]: Stage: kargs Feb 9 02:51:22.106271 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 02:51:22.106422 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 02:51:22.107732 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 02:51:22.109286 ignition[754]: kargs: kargs passed Feb 9 02:51:22.109427 ignition[754]: Ignition finished successfully Feb 9 02:51:22.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.110269 systemd[1]: Finished ignition-kargs.service. Feb 9 02:51:22.110842 systemd[1]: Starting ignition-disks.service... Feb 9 02:51:22.115385 ignition[760]: Ignition 2.14.0 Feb 9 02:51:22.115633 ignition[760]: Stage: disks Feb 9 02:51:22.115800 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 02:51:22.115950 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 02:51:22.117093 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 02:51:22.118731 ignition[760]: disks: disks passed Feb 9 02:51:22.118769 ignition[760]: Ignition finished successfully Feb 9 02:51:22.119397 systemd[1]: Finished ignition-disks.service. Feb 9 02:51:22.119577 systemd[1]: Reached target initrd-root-device.target. Feb 9 02:51:22.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 02:51:22.119687 systemd[1]: Reached target local-fs-pre.target. Feb 9 02:51:22.119904 systemd[1]: Reached target local-fs.target. Feb 9 02:51:22.120029 systemd[1]: Reached target sysinit.target. Feb 9 02:51:22.120213 systemd[1]: Reached target basic.target. Feb 9 02:51:22.120908 systemd[1]: Starting systemd-fsck-root.service... Feb 9 02:51:22.157879 systemd-fsck[768]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 02:51:22.164069 systemd[1]: Finished systemd-fsck-root.service. Feb 9 02:51:22.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.164643 systemd[1]: Mounting sysroot.mount... Feb 9 02:51:22.266568 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 02:51:22.266192 systemd[1]: Mounted sysroot.mount. Feb 9 02:51:22.266382 systemd[1]: Reached target initrd-root-fs.target. Feb 9 02:51:22.267586 systemd[1]: Mounting sysroot-usr.mount... Feb 9 02:51:22.268057 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 02:51:22.268098 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 02:51:22.268119 systemd[1]: Reached target ignition-diskful.target. Feb 9 02:51:22.271186 systemd[1]: Mounted sysroot-usr.mount. Feb 9 02:51:22.271793 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 02:51:22.275568 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 02:51:22.279677 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory Feb 9 02:51:22.282191 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 02:51:22.284956 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 02:51:22.353587 systemd[1]: Finished initrd-setup-root.service. Feb 9 02:51:22.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.354180 systemd[1]: Starting ignition-mount.service... Feb 9 02:51:22.354661 systemd[1]: Starting sysroot-boot.service... Feb 9 02:51:22.358471 bash[819]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 02:51:22.363663 ignition[820]: INFO : Ignition 2.14.0 Feb 9 02:51:22.363928 ignition[820]: INFO : Stage: mount Feb 9 02:51:22.364124 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 02:51:22.364294 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 02:51:22.365822 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 02:51:22.367606 ignition[820]: INFO : mount: mount passed Feb 9 02:51:22.367766 ignition[820]: INFO : Ignition finished successfully Feb 9 02:51:22.368342 systemd[1]: Finished ignition-mount.service. Feb 9 02:51:22.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.375454 systemd[1]: Finished sysroot-boot.service. 
Feb 9 02:51:22.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:51:22.654889 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 02:51:22.662542 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (829) Feb 9 02:51:22.662571 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 02:51:22.664166 kernel: BTRFS info (device sda6): using free space tree Feb 9 02:51:22.664182 kernel: BTRFS info (device sda6): has skinny extents Feb 9 02:51:22.670543 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 02:51:22.671721 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 02:51:22.672471 systemd[1]: Starting ignition-files.service... Feb 9 02:51:22.683540 ignition[849]: INFO : Ignition 2.14.0 Feb 9 02:51:22.683540 ignition[849]: INFO : Stage: files Feb 9 02:51:22.683890 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 02:51:22.683890 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 02:51:22.685287 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 02:51:22.687970 ignition[849]: DEBUG : files: compiled without relabeling support, skipping Feb 9 02:51:22.688490 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 02:51:22.688490 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 02:51:22.690872 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 02:51:22.691106 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 02:51:22.691871 unknown[849]: 
wrote ssh authorized keys file for user: core Feb 9 02:51:22.692307 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 02:51:22.692841 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 02:51:22.692841 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 02:51:23.871764 systemd-networkd[734]: ens192: Gained IPv6LL Feb 9 02:51:23.943073 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 02:54:51.811974 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 02:54:51.812672 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 02:54:51.812929 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 02:54:52.281469 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 02:54:52.405687 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 02:54:52.406069 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 02:54:52.406287 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 02:54:52.406512 ignition[849]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 02:54:52.836732 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 02:54:52.932396 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 02:54:52.932748 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 02:54:52.932748 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 02:54:52.933074 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 02:54:53.086149 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 02:54:53.321391 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 02:54:53.321710 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 02:54:53.321710 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 02:54:53.321710 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 02:54:53.369990 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 02:54:53.957291 ignition[849]: DEBUG : files: createFilesystemsFiles: 
createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 02:54:53.957764 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 02:54:53.957960 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 02:54:53.958160 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 02:54:54.004549 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 02:54:54.270421 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 02:54:54.270744 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 02:54:54.270744 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 02:54:54.270744 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 02:54:54.271922 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 02:54:54.272094 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 02:54:54.676292 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 02:54:54.709965 ignition[849]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 02:54:54.710207 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 02:54:54.710207 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 02:54:54.710207 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 02:54:54.710207 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 02:54:54.710207 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 02:54:54.710997 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 02:54:54.710997 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 02:54:54.710997 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 02:54:54.715495 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 02:54:54.715674 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 02:54:54.726344 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Feb 9 02:54:54.726590 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 02:54:54.744793 ignition[849]: INFO : files: createFilesystemsFiles: 
createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2056153830" Feb 9 02:54:54.751617 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (854) Feb 9 02:54:54.751650 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2056153830": device or resource busy Feb 9 02:54:54.751650 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2056153830", trying btrfs: device or resource busy Feb 9 02:54:54.751650 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2056153830" Feb 9 02:54:54.751650 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2056153830" Feb 9 02:54:54.759492 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2056153830" Feb 9 02:54:54.760286 systemd[1]: mnt-oem2056153830.mount: Deactivated successfully. 
Feb 9 02:54:54.760581 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2056153830" Feb 9 02:54:54.760741 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Feb 9 02:54:54.768936 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Feb 9 02:54:54.769115 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Feb 9 02:54:54.769115 ignition[849]: INFO : files: op(15): [started] processing unit "vmtoolsd.service" Feb 9 02:54:54.769115 ignition[849]: INFO : files: op(15): [finished] processing unit "vmtoolsd.service" Feb 9 02:54:54.769115 ignition[849]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Feb 9 02:54:54.769115 ignition[849]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 02:54:54.769115 ignition[849]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 02:54:54.769115 ignition[849]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Feb 9 02:54:54.769115 ignition[849]: INFO : files: op(18): [started] processing unit "prepare-cni-plugins.service" Feb 9 02:54:54.769115 ignition[849]: INFO : files: op(18): op(19): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(18): op(19): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(18): [finished] processing unit 
"prepare-cni-plugins.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1a): [started] processing unit "prepare-critools.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1c): [started] processing unit "prepare-helm.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-helm.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(20): [started] setting preset to enabled for "vmtoolsd.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(20): [finished] setting preset to enabled for "vmtoolsd.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(21): [started] setting preset to disabled for 
"coreos-metadata.service" Feb 9 02:54:54.770439 ignition[849]: INFO : files: op(21): op(22): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 02:54:55.183295 ignition[849]: INFO : files: op(21): op(22): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 02:54:55.183762 ignition[849]: INFO : files: op(21): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 02:54:55.183762 ignition[849]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 02:54:55.183762 ignition[849]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 02:54:55.183762 ignition[849]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 02:54:55.183762 ignition[849]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 02:54:55.183762 ignition[849]: INFO : files: files passed Feb 9 02:54:55.183762 ignition[849]: INFO : Ignition finished successfully Feb 9 02:54:55.190653 kernel: kauditd_printk_skb: 24 callbacks suppressed Feb 9 02:54:55.190672 kernel: audit: type=1130 audit(1707447295.185:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.184388 systemd[1]: Finished ignition-files.service. Feb 9 02:54:55.190978 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 02:54:55.191271 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Feb 9 02:54:55.191843 systemd[1]: Starting ignition-quench.service... Feb 9 02:54:55.194525 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 02:54:55.194747 systemd[1]: Finished ignition-quench.service. Feb 9 02:54:55.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.199795 kernel: audit: type=1130 audit(1707447295.193:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.199814 kernel: audit: type=1131 audit(1707447295.193:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.200599 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 02:54:55.201067 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 02:54:55.203768 kernel: audit: type=1130 audit(1707447295.199:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.201229 systemd[1]: Reached target ignition-complete.target. Feb 9 02:54:55.204195 systemd[1]: Starting initrd-parse-etc.service... 
Feb 9 02:54:55.212664 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 02:54:55.212852 systemd[1]: Finished initrd-parse-etc.service. Feb 9 02:54:55.213126 systemd[1]: Reached target initrd-fs.target. Feb 9 02:54:55.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.215543 kernel: audit: type=1130 audit(1707447295.211:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.215638 systemd[1]: Reached target initrd.target. Feb 9 02:54:55.218170 kernel: audit: type=1131 audit(1707447295.211:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.218122 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 02:54:55.218562 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 02:54:55.225417 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 02:54:55.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.228486 systemd[1]: Starting initrd-cleanup.service... Feb 9 02:54:55.228605 kernel: audit: type=1130 audit(1707447295.224:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 02:54:55.233546 systemd[1]: Stopped target network.target. Feb 9 02:54:55.233821 systemd[1]: Stopped target nss-lookup.target. Feb 9 02:54:55.234070 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 02:54:55.234341 systemd[1]: Stopped target timers.target. Feb 9 02:54:55.234599 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 02:54:55.234786 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 02:54:55.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.237434 systemd[1]: Stopped target initrd.target. Feb 9 02:54:55.237567 kernel: audit: type=1131 audit(1707447295.233:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.237729 systemd[1]: Stopped target basic.target. Feb 9 02:54:55.237976 systemd[1]: Stopped target ignition-complete.target. Feb 9 02:54:55.238266 systemd[1]: Stopped target ignition-diskful.target. Feb 9 02:54:55.238518 systemd[1]: Stopped target initrd-root-device.target. Feb 9 02:54:55.238781 systemd[1]: Stopped target remote-fs.target. Feb 9 02:54:55.239022 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 02:54:55.239278 systemd[1]: Stopped target sysinit.target. Feb 9 02:54:55.239524 systemd[1]: Stopped target local-fs.target. Feb 9 02:54:55.239770 systemd[1]: Stopped target local-fs-pre.target. Feb 9 02:54:55.240016 systemd[1]: Stopped target swap.target. Feb 9 02:54:55.240239 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 02:54:55.240432 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 02:54:55.240721 systemd[1]: Stopped target cryptsetup.target. 
Feb 9 02:54:55.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.243417 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 02:54:55.243545 kernel: audit: type=1131 audit(1707447295.239:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.243611 systemd[1]: Stopped dracut-initqueue.service. Feb 9 02:54:55.243789 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 02:54:55.246232 kernel: audit: type=1131 audit(1707447295.242:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.243845 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 02:54:55.246368 systemd[1]: Stopped target paths.target. Feb 9 02:54:55.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.246496 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 02:54:55.248559 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 02:54:55.248711 systemd[1]: Stopped target slices.target. Feb 9 02:54:55.248892 systemd[1]: Stopped target sockets.target. Feb 9 02:54:55.249048 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 02:54:55.249086 systemd[1]: Closed iscsid.socket. 
Feb 9 02:54:55.249231 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 02:54:55.249269 systemd[1]: Closed iscsiuio.socket. Feb 9 02:54:55.249478 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 02:54:55.249544 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 02:54:55.249768 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 02:54:55.249820 systemd[1]: Stopped ignition-files.service. Feb 9 02:54:55.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.250776 systemd[1]: Stopping ignition-mount.service... Feb 9 02:54:55.250891 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 02:54:55.250983 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 02:54:55.251593 systemd[1]: Stopping sysroot-boot.service... Feb 9 02:54:55.251820 systemd[1]: Stopping systemd-networkd.service... 
Feb 9 02:54:55.252021 systemd[1]: Stopping systemd-resolved.service... Feb 9 02:54:55.252112 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 02:54:55.252287 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 02:54:55.252479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 02:54:55.252577 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 02:54:55.258551 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 02:54:55.259107 systemd[1]: Stopped systemd-resolved.service. Feb 9 02:54:55.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.261023 ignition[889]: INFO : Ignition 2.14.0 Feb 9 02:54:55.261023 ignition[889]: INFO : Stage: umount Feb 9 02:54:55.260960 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 02:54:55.261093 systemd[1]: Stopped systemd-networkd.service. Feb 9 02:54:55.261972 ignition[889]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 02:54:55.261972 ignition[889]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 02:54:55.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.263608 ignition[889]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 02:54:55.264131 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 02:54:55.264519 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Feb 9 02:54:55.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.264577 systemd[1]: Stopped sysroot-boot.service. Feb 9 02:54:55.265326 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 02:54:55.265368 systemd[1]: Finished initrd-cleanup.service. Feb 9 02:54:55.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.264000 audit: BPF prog-id=6 op=UNLOAD Feb 9 02:54:55.264000 audit: BPF prog-id=9 op=UNLOAD Feb 9 02:54:55.266046 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 02:54:55.266075 systemd[1]: Closed systemd-networkd.socket. Feb 9 02:54:55.266640 systemd[1]: Stopping network-cleanup.service... Feb 9 02:54:55.266757 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 02:54:55.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.266791 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 02:54:55.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.266969 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. 
Feb 9 02:54:55.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.266993 systemd[1]: Stopped afterburn-network-kargs.service. Feb 9 02:54:55.267135 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 02:54:55.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.267163 systemd[1]: Stopped systemd-sysctl.service. Feb 9 02:54:55.267401 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 02:54:55.267424 systemd[1]: Stopped systemd-modules-load.service. Feb 9 02:54:55.268298 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 02:54:55.272492 ignition[889]: INFO : umount: umount passed Feb 9 02:54:55.272725 ignition[889]: INFO : Ignition finished successfully Feb 9 02:54:55.272823 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 02:54:55.272876 systemd[1]: Stopped network-cleanup.service. Feb 9 02:54:55.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.273554 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 02:54:55.273605 systemd[1]: Stopped ignition-mount.service. Feb 9 02:54:55.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.273821 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 9 02:54:55.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.273843 systemd[1]: Stopped ignition-disks.service. Feb 9 02:54:55.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.273962 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 02:54:55.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.273980 systemd[1]: Stopped ignition-kargs.service. Feb 9 02:54:55.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.274125 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 02:54:55.274143 systemd[1]: Stopped ignition-setup.service. Feb 9 02:54:55.274282 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 02:54:55.274301 systemd[1]: Stopped initrd-setup-root.service. Feb 9 02:54:55.274723 systemd[1]: Stopping systemd-udevd.service... Feb 9 02:54:55.277303 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 02:54:55.277376 systemd[1]: Stopped systemd-udevd.service. Feb 9 02:54:55.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.277740 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 02:54:55.277764 systemd[1]: Closed systemd-udevd-control.socket. 
Feb 9 02:54:55.277970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 02:54:55.277985 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 02:54:55.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.278135 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 02:54:55.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.278158 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 02:54:55.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.278331 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 02:54:55.278352 systemd[1]: Stopped dracut-cmdline.service. Feb 9 02:54:55.278482 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 02:54:55.278500 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 02:54:55.279064 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 02:54:55.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.279186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 02:54:55.279213 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 02:54:55.282881 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 02:54:55.282927 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Feb 9 02:54:55.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:55.283210 systemd[1]: Reached target initrd-switch-root.target. Feb 9 02:54:55.283815 systemd[1]: Starting initrd-switch-root.service... Feb 9 02:54:55.290706 systemd[1]: Switching root. Feb 9 02:54:55.303358 iscsid[739]: iscsid shutting down. Feb 9 02:54:55.303581 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Feb 9 02:54:55.303617 systemd-journald[216]: Journal stopped Feb 9 02:54:57.238503 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 02:54:57.238526 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 02:54:57.238554 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 02:54:57.238561 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 02:54:57.238567 kernel: SELinux: policy capability open_perms=1 Feb 9 02:54:57.238572 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 02:54:57.238603 kernel: SELinux: policy capability always_check_network=0 Feb 9 02:54:57.238611 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 02:54:57.238617 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 02:54:57.238622 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 02:54:57.238628 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 02:54:57.238635 systemd[1]: Successfully loaded SELinux policy in 44.690ms. Feb 9 02:54:57.238644 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.553ms. 
Feb 9 02:54:57.238652 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 02:54:57.238659 systemd[1]: Detected virtualization vmware. Feb 9 02:54:57.238665 systemd[1]: Detected architecture x86-64. Feb 9 02:54:57.238672 systemd[1]: Detected first boot. Feb 9 02:54:57.238680 systemd[1]: Initializing machine ID from random generator. Feb 9 02:54:57.238694 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 02:54:57.238707 systemd[1]: Populated /etc with preset unit settings. Feb 9 02:54:57.238715 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 02:54:57.238722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 02:54:57.238730 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 02:54:57.238737 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 02:54:57.238745 systemd[1]: Stopped iscsiuio.service. Feb 9 02:54:57.238752 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 02:54:57.238759 systemd[1]: Stopped iscsid.service. Feb 9 02:54:57.238766 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 02:54:57.238772 systemd[1]: Stopped initrd-switch-root.service. Feb 9 02:54:57.238779 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Feb 9 02:54:57.238785 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 02:54:57.238793 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 02:54:57.238800 systemd[1]: Created slice system-getty.slice. Feb 9 02:54:57.238807 systemd[1]: Created slice system-modprobe.slice. Feb 9 02:54:57.238813 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 02:54:57.238820 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 02:54:57.238828 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 02:54:57.240336 systemd[1]: Created slice user.slice. Feb 9 02:54:57.240347 systemd[1]: Started systemd-ask-password-console.path. Feb 9 02:54:57.240355 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 02:54:57.240364 systemd[1]: Set up automount boot.automount. Feb 9 02:54:57.240373 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 02:54:57.240379 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 02:54:57.240386 systemd[1]: Stopped target initrd-fs.target. Feb 9 02:54:57.240393 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 02:54:57.240400 systemd[1]: Reached target integritysetup.target. Feb 9 02:54:57.240406 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 02:54:57.240413 systemd[1]: Reached target remote-fs.target. Feb 9 02:54:57.240421 systemd[1]: Reached target slices.target. Feb 9 02:54:57.240428 systemd[1]: Reached target swap.target. Feb 9 02:54:57.240435 systemd[1]: Reached target torcx.target. Feb 9 02:54:57.240441 systemd[1]: Reached target veritysetup.target. Feb 9 02:54:57.240449 systemd[1]: Listening on systemd-coredump.socket. Feb 9 02:54:57.240457 systemd[1]: Listening on systemd-initctl.socket. Feb 9 02:54:57.240463 systemd[1]: Listening on systemd-networkd.socket. Feb 9 02:54:57.240471 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 02:54:57.240477 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Feb 9 02:54:57.240485 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 02:54:57.240492 systemd[1]: Mounting dev-hugepages.mount... Feb 9 02:54:57.240499 systemd[1]: Mounting dev-mqueue.mount... Feb 9 02:54:57.240508 systemd[1]: Mounting media.mount... Feb 9 02:54:57.240519 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 02:54:57.240541 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 02:54:57.240554 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 02:54:57.240563 systemd[1]: Mounting tmp.mount... Feb 9 02:54:57.240570 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 02:54:57.240577 systemd[1]: Starting ignition-delete-config.service... Feb 9 02:54:57.240584 systemd[1]: Starting kmod-static-nodes.service... Feb 9 02:54:57.240590 systemd[1]: Starting modprobe@configfs.service... Feb 9 02:54:57.240597 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 02:54:57.240604 systemd[1]: Starting modprobe@drm.service... Feb 9 02:54:57.240613 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 02:54:57.240620 systemd[1]: Starting modprobe@fuse.service... Feb 9 02:54:57.240627 systemd[1]: Starting modprobe@loop.service... Feb 9 02:54:57.240635 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 02:54:57.240642 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 02:54:57.240648 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 02:54:57.240655 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 02:54:57.240662 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 02:54:57.240669 systemd[1]: Stopped systemd-journald.service. Feb 9 02:54:57.240677 systemd[1]: Starting systemd-journald.service... Feb 9 02:54:57.240684 kernel: fuse: init (API version 7.34) Feb 9 02:54:57.240690 systemd[1]: Starting systemd-modules-load.service... 
Feb 9 02:54:57.240698 systemd[1]: Starting systemd-network-generator.service... Feb 9 02:54:57.240705 systemd[1]: Starting systemd-remount-fs.service... Feb 9 02:54:57.240712 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 02:54:57.240719 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 02:54:57.241008 systemd[1]: Stopped verity-setup.service. Feb 9 02:54:57.241020 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 02:54:57.241029 systemd[1]: Mounted dev-hugepages.mount. Feb 9 02:54:57.241037 systemd[1]: Mounted dev-mqueue.mount. Feb 9 02:54:57.241044 systemd[1]: Mounted media.mount. Feb 9 02:54:57.241051 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 02:54:57.241057 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 02:54:57.241064 systemd[1]: Mounted tmp.mount. Feb 9 02:54:57.241072 systemd[1]: Finished kmod-static-nodes.service. Feb 9 02:54:57.241079 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 02:54:57.241086 systemd[1]: Finished modprobe@configfs.service. Feb 9 02:54:57.241094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 02:54:57.241102 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 02:54:57.241108 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 02:54:57.241116 systemd[1]: Finished modprobe@drm.service. Feb 9 02:54:57.241123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 02:54:57.241131 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 02:54:57.241137 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 02:54:57.241144 systemd[1]: Finished modprobe@fuse.service. Feb 9 02:54:57.241153 systemd[1]: Finished systemd-modules-load.service. Feb 9 02:54:57.241160 systemd[1]: Finished systemd-network-generator.service. Feb 9 02:54:57.241167 systemd[1]: Finished systemd-remount-fs.service. 
Feb 9 02:54:57.241174 systemd[1]: Reached target network-pre.target. Feb 9 02:54:57.241180 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 02:54:57.241192 systemd-journald[1016]: Journal started Feb 9 02:54:57.241226 systemd-journald[1016]: Runtime Journal (/run/log/journal/97c854b311c4481fb4be23daba417a27) is 4.8M, max 38.8M, 34.0M free. Feb 9 02:54:55.389000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 02:54:55.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 02:54:55.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 02:54:55.429000 audit: BPF prog-id=10 op=LOAD Feb 9 02:54:55.429000 audit: BPF prog-id=10 op=UNLOAD Feb 9 02:54:55.429000 audit: BPF prog-id=11 op=LOAD Feb 9 02:54:55.429000 audit: BPF prog-id=11 op=UNLOAD Feb 9 02:54:55.510000 audit[924]: AVC avc: denied { associate } for pid=924 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 02:54:55.510000 audit[924]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=907 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 02:54:55.510000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 02:54:55.511000 audit[924]: AVC avc: denied { associate } for pid=924 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 02:54:55.511000 audit[924]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179b9 a2=1ed a3=0 items=2 ppid=907 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 02:54:55.511000 audit: CWD cwd="/" Feb 9 02:54:55.511000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 02:54:55.511000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 02:54:55.511000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 02:54:57.110000 audit: BPF prog-id=12 op=LOAD Feb 9 02:54:57.110000 audit: BPF prog-id=3 op=UNLOAD Feb 9 02:54:57.110000 audit: BPF prog-id=13 op=LOAD Feb 9 02:54:57.110000 audit: BPF prog-id=14 op=LOAD Feb 9 02:54:57.110000 audit: BPF prog-id=4 op=UNLOAD Feb 9 02:54:57.110000 audit: BPF prog-id=5 op=UNLOAD Feb 9 02:54:57.112000 audit: BPF 
prog-id=15 op=LOAD Feb 9 02:54:57.112000 audit: BPF prog-id=12 op=UNLOAD Feb 9 02:54:57.112000 audit: BPF prog-id=16 op=LOAD Feb 9 02:54:57.112000 audit: BPF prog-id=17 op=LOAD Feb 9 02:54:57.112000 audit: BPF prog-id=13 op=UNLOAD Feb 9 02:54:57.112000 audit: BPF prog-id=14 op=UNLOAD Feb 9 02:54:57.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.115000 audit: BPF prog-id=15 op=UNLOAD Feb 9 02:54:57.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 02:54:57.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.189000 audit: BPF prog-id=18 op=LOAD Feb 9 02:54:57.189000 audit: BPF prog-id=19 op=LOAD Feb 9 02:54:57.189000 audit: BPF prog-id=20 op=LOAD Feb 9 02:54:57.189000 audit: BPF prog-id=16 op=UNLOAD Feb 9 02:54:57.189000 audit: BPF prog-id=17 op=UNLOAD Feb 9 02:54:57.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 02:54:57.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.225000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 02:54:57.225000 audit[1016]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdea42b5b0 a2=4000 a3=7ffdea42b64c items=0 ppid=1 pid=1016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 02:54:57.225000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 02:54:57.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 02:54:57.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:57.110748 systemd[1]: Queued start job for default target multi-user.target. Feb 9 02:54:55.509681 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 02:54:57.114730 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 02:54:55.510171 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 02:54:57.245563 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 02:54:57.245579 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 9 02:54:55.510183 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 02:54:55.510203 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 02:54:55.510209 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 02:54:55.510227 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 02:54:55.510234 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 02:54:55.510356 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 02:54:57.245912 jq[991]: true
Feb 9 02:54:55.510377 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 02:54:55.510383 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 02:54:55.510909 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 02:54:55.510929 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 02:54:55.510941 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 02:54:55.510949 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 02:54:55.510958 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 02:54:55.510966 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 02:54:56.925407 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:56Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 02:54:56.925572 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:56Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 02:54:56.925640 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:56Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 02:54:56.925749 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:56Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 02:54:57.246577 jq[1025]: true
Feb 9 02:54:56.925785 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:56Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 02:54:56.925826 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2024-02-09T02:54:56Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 02:54:57.250891 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 02:54:57.250910 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 02:54:57.253654 systemd[1]: Starting systemd-random-seed.service...
Feb 9 02:54:57.255546 systemd[1]: Starting systemd-sysctl.service...
Feb 9 02:54:57.258563 systemd[1]: Started systemd-journald.service.
Feb 9 02:54:57.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.261132 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 02:54:57.261284 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 02:54:57.263902 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 02:54:57.270769 kernel: loop: module loaded
Feb 9 02:54:57.270975 systemd-journald[1016]: Time spent on flushing to /var/log/journal/97c854b311c4481fb4be23daba417a27 is 27.693ms for 2035 entries.
Feb 9 02:54:57.270975 systemd-journald[1016]: System Journal (/var/log/journal/97c854b311c4481fb4be23daba417a27) is 8.0M, max 584.8M, 576.8M free.
Feb 9 02:54:57.304533 systemd-journald[1016]: Received client request to flush runtime journal.
Feb 9 02:54:57.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.272583 systemd[1]: Finished systemd-sysctl.service.
Feb 9 02:54:57.273604 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 02:54:57.273672 systemd[1]: Finished modprobe@loop.service.
Feb 9 02:54:57.273839 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 02:54:57.274803 systemd[1]: Finished systemd-random-seed.service.
Feb 9 02:54:57.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.274929 systemd[1]: Reached target first-boot-complete.target.
Feb 9 02:54:57.276596 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 02:54:57.277493 systemd[1]: Starting systemd-sysusers.service...
Feb 9 02:54:57.305199 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 02:54:57.337605 systemd[1]: Finished systemd-sysusers.service.
Feb 9 02:54:57.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.370897 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 02:54:57.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.371955 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 02:54:57.378484 udevadm[1054]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 02:54:57.434993 ignition[1037]: Ignition 2.14.0
Feb 9 02:54:57.435381 ignition[1037]: deleting config from guestinfo properties
Feb 9 02:54:57.439112 ignition[1037]: Successfully deleted config
Feb 9 02:54:57.440101 systemd[1]: Finished ignition-delete-config.service.
Feb 9 02:54:57.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.687293 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 02:54:57.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.686000 audit: BPF prog-id=21 op=LOAD
Feb 9 02:54:57.686000 audit: BPF prog-id=22 op=LOAD
Feb 9 02:54:57.686000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 02:54:57.686000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 02:54:57.688819 systemd[1]: Starting systemd-udevd.service...
Feb 9 02:54:57.700650 systemd-udevd[1055]: Using default interface naming scheme 'v252'.
Feb 9 02:54:57.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.719000 audit: BPF prog-id=23 op=LOAD
Feb 9 02:54:57.720095 systemd[1]: Started systemd-udevd.service.
Feb 9 02:54:57.721470 systemd[1]: Starting systemd-networkd.service...
Feb 9 02:54:57.730000 audit: BPF prog-id=24 op=LOAD
Feb 9 02:54:57.730000 audit: BPF prog-id=25 op=LOAD
Feb 9 02:54:57.730000 audit: BPF prog-id=26 op=LOAD
Feb 9 02:54:57.732481 systemd[1]: Starting systemd-userdbd.service...
Feb 9 02:54:57.754471 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 02:54:57.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.765780 systemd[1]: Started systemd-userdbd.service.
Feb 9 02:54:57.795547 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 9 02:54:57.802545 kernel: ACPI: button: Power Button [PWRF]
Feb 9 02:54:57.808426 systemd-networkd[1064]: lo: Link UP
Feb 9 02:54:57.808431 systemd-networkd[1064]: lo: Gained carrier
Feb 9 02:54:57.808913 systemd-networkd[1064]: Enumeration completed
Feb 9 02:54:57.808978 systemd-networkd[1064]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Feb 9 02:54:57.808988 systemd[1]: Started systemd-networkd.service.
Feb 9 02:54:57.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.812042 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Feb 9 02:54:57.812178 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Feb 9 02:54:57.813282 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready
Feb 9 02:54:57.813644 systemd-networkd[1064]: ens192: Link UP
Feb 9 02:54:57.813791 systemd-networkd[1064]: ens192: Gained carrier
Feb 9 02:54:57.818549 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1059)
Feb 9 02:54:57.837509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 02:54:57.869659 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16
Feb 9 02:54:57.869841 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Feb 9 02:54:57.872543 kernel: Guest personality initialized and is active
Feb 9 02:54:57.875673 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 9 02:54:57.875732 kernel: Initialized host personality
Feb 9 02:54:57.871000 audit[1058]: AVC avc: denied { confidentiality } for pid=1058 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 02:54:57.871000 audit[1058]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a5d53b5fe0 a1=32194 a2=7f964ef5fbc5 a3=5 items=108 ppid=1055 pid=1058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 02:54:57.871000 audit: CWD cwd="/"
Feb 9 02:54:57.871000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=1 name=(null) inode=24817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=2 name=(null) inode=24817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=3 name=(null) inode=24818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=4 name=(null) inode=24817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=5 name=(null) inode=24819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=6 name=(null) inode=24817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=7 name=(null) inode=24820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=8 name=(null) inode=24820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=9 name=(null) inode=24821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=10 name=(null) inode=24820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=11 name=(null) inode=24822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=12 name=(null) inode=24820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=13 name=(null) inode=24823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=14 name=(null) inode=24820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=15 name=(null) inode=24824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=16 name=(null) inode=24820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=17 name=(null) inode=24825 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=18 name=(null) inode=24817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=19 name=(null) inode=24826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=20 name=(null) inode=24826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=21 name=(null) inode=24827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=22 name=(null) inode=24826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=23 name=(null) inode=24828 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=24 name=(null) inode=24826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=25 name=(null) inode=24829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=26 name=(null) inode=24826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=27 name=(null) inode=24830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=28 name=(null) inode=24826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=29 name=(null) inode=24831 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=30 name=(null) inode=24817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=31 name=(null) inode=24832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=32 name=(null) inode=24832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=33 name=(null) inode=24833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=34 name=(null) inode=24832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=35 name=(null) inode=24834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=36 name=(null) inode=24832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=37 name=(null) inode=24835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=38 name=(null) inode=24832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=39 name=(null) inode=24836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=40 name=(null) inode=24832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=41 name=(null) inode=24837 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=42 name=(null) inode=24817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=43 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=44 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=45 name=(null) inode=24839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=46 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=47 name=(null) inode=24840 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=48 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=49 name=(null) inode=24841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=50 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=51 name=(null) inode=24842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=52 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=53 name=(null) inode=24843 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=55 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=56 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=57 name=(null) inode=24845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=58 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=59 name=(null) inode=24846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=60 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=61 name=(null) inode=24847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=62 name=(null) inode=24847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=63 name=(null) inode=24848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=64 name=(null) inode=24847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=65 name=(null) inode=24849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=66 name=(null) inode=24847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=67 name=(null) inode=24850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=68 name=(null) inode=24847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=69 name=(null) inode=24851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=70 name=(null) inode=24847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=71 name=(null) inode=24852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=72 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=73 name=(null) inode=24853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=74 name=(null) inode=24853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=75 name=(null) inode=24854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=76 name=(null) inode=24853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=77 name=(null) inode=24855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=78 name=(null) inode=24853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=79 name=(null) inode=24856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=80 name=(null) inode=24853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=81 name=(null) inode=24857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=82 name=(null) inode=24853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=83 name=(null) inode=24858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=84 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=85 name=(null) inode=24859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=86 name=(null) inode=24859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=87 name=(null) inode=24860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=88 name=(null) inode=24859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=89 name=(null) inode=24861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=90 name=(null) inode=24859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=91 name=(null) inode=24862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=92 name=(null) inode=24859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=93 name=(null) inode=24863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=94 name=(null) inode=24859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=95 name=(null) inode=24864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=96 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=97 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=98 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=99 name=(null) inode=24866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=100 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=101 name=(null) inode=24867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=102 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=103 name=(null) inode=24868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=104 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=105 name=(null) inode=24869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=106 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PATH item=107 name=(null) inode=24870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 02:54:57.871000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 02:54:57.881549 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Feb 9 02:54:57.900545 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Feb 9 02:54:57.915972 (udev-worker)[1057]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Feb 9 02:54:57.922547 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 02:54:57.938760 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 02:54:57.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.939807 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 02:54:57.956404 lvm[1088]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 02:54:57.976173 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 02:54:57.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.976358 systemd[1]: Reached target cryptsetup.target.
Feb 9 02:54:57.977273 systemd[1]: Starting lvm2-activation.service...
Feb 9 02:54:57.979972 lvm[1089]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 02:54:57.997100 systemd[1]: Finished lvm2-activation.service.
Feb 9 02:54:57.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 02:54:57.997286 systemd[1]: Reached target local-fs-pre.target.
Feb 9 02:54:57.997387 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 02:54:57.997406 systemd[1]: Reached target local-fs.target.
Feb 9 02:54:57.997497 systemd[1]: Reached target machines.target.
Feb 9 02:54:57.998519 systemd[1]: Starting ldconfig.service...
Feb 9 02:54:58.001171 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 02:54:58.001212 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 02:54:58.002059 systemd[1]: Starting systemd-boot-update.service...
Feb 9 02:54:58.002788 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 02:54:58.003658 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 02:54:58.003821 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 02:54:58.003853 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 02:54:58.004861 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 02:54:58.026902 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1091 (bootctl)
Feb 9 02:54:58.027667 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 02:54:58.046704 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 02:54:58.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:58.161113 systemd-tmpfiles[1094]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 02:54:58.295763 systemd-tmpfiles[1094]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 02:54:58.432438 systemd-tmpfiles[1094]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 02:54:58.799801 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 02:54:58.800242 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 02:54:58.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:58.803156 systemd-fsck[1100]: fsck.fat 4.2 (2021-01-31) Feb 9 02:54:58.803156 systemd-fsck[1100]: /dev/sda1: 789 files, 115332/258078 clusters Feb 9 02:54:58.804315 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 02:54:58.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:58.805277 systemd[1]: Mounting boot.mount... Feb 9 02:54:58.816669 systemd[1]: Mounted boot.mount. Feb 9 02:54:58.829017 systemd[1]: Finished systemd-boot-update.service. Feb 9 02:54:58.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 02:54:58.883836 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 02:54:58.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:58.884952 systemd[1]: Starting audit-rules.service... Feb 9 02:54:58.885795 systemd[1]: Starting clean-ca-certificates.service... Feb 9 02:54:58.886714 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 02:54:58.886000 audit: BPF prog-id=27 op=LOAD Feb 9 02:54:58.888002 systemd[1]: Starting systemd-resolved.service... Feb 9 02:54:58.887000 audit: BPF prog-id=28 op=LOAD Feb 9 02:54:58.889080 systemd[1]: Starting systemd-timesyncd.service... Feb 9 02:54:58.889961 systemd[1]: Starting systemd-update-utmp.service... Feb 9 02:54:58.893590 systemd[1]: Finished clean-ca-certificates.service. Feb 9 02:54:58.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:58.893754 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 02:54:58.899000 audit[1108]: SYSTEM_BOOT pid=1108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 02:54:58.901790 systemd[1]: Finished systemd-update-utmp.service. Feb 9 02:54:58.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 02:54:58.924020 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 02:54:58.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 02:54:58.941897 augenrules[1123]: No rules Feb 9 02:54:58.940000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 02:54:58.940000 audit[1123]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd2ee91e10 a2=420 a3=0 items=0 ppid=1103 pid=1123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 02:54:58.940000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 02:54:58.942065 systemd[1]: Finished audit-rules.service. Feb 9 02:54:58.947422 systemd[1]: Started systemd-timesyncd.service. Feb 9 02:54:58.947600 systemd[1]: Reached target time-set.target. Feb 9 02:54:58.948939 systemd-resolved[1106]: Positive Trust Anchors: Feb 9 02:54:58.948947 systemd-resolved[1106]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 02:54:58.948967 systemd-resolved[1106]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 02:54:58.972373 systemd-resolved[1106]: Defaulting to hostname 'linux'. 
Feb 9 02:54:58.973667 systemd[1]: Started systemd-resolved.service. Feb 9 02:54:58.973811 systemd[1]: Reached target network.target. Feb 9 02:54:58.973906 systemd[1]: Reached target nss-lookup.target. Feb 9 02:54:58.975679 systemd-networkd[1064]: ens192: Gained IPv6LL Feb 9 02:54:59.081691 ldconfig[1090]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 02:54:59.095475 systemd[1]: Finished ldconfig.service. Feb 9 02:54:59.096538 systemd[1]: Starting systemd-update-done.service... Feb 9 02:54:59.101254 systemd[1]: Finished systemd-update-done.service. Feb 9 02:54:59.101404 systemd[1]: Reached target sysinit.target. Feb 9 02:54:59.101546 systemd[1]: Started motdgen.path. Feb 9 02:54:59.101645 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 02:54:59.101818 systemd[1]: Started logrotate.timer. Feb 9 02:54:59.101940 systemd[1]: Started mdadm.timer. Feb 9 02:54:59.102021 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 02:54:59.102109 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 02:54:59.102129 systemd[1]: Reached target paths.target. Feb 9 02:54:59.102207 systemd[1]: Reached target timers.target. Feb 9 02:54:59.102425 systemd[1]: Listening on dbus.socket. Feb 9 02:54:59.103162 systemd[1]: Starting docker.socket... Feb 9 02:54:59.104915 systemd[1]: Listening on sshd.socket. Feb 9 02:54:59.105055 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 02:54:59.105294 systemd[1]: Listening on docker.socket. Feb 9 02:54:59.105420 systemd[1]: Reached target sockets.target. Feb 9 02:54:59.105507 systemd[1]: Reached target basic.target. 
Feb 9 02:54:59.105690 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 02:54:59.105706 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 02:54:59.106358 systemd[1]: Starting containerd.service... Feb 9 02:54:59.107278 systemd[1]: Starting dbus.service... Feb 9 02:54:59.108090 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 02:54:59.109050 systemd[1]: Starting extend-filesystems.service... Feb 9 02:54:59.110937 jq[1134]: false Feb 9 02:54:59.109636 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 02:54:59.110502 systemd[1]: Starting motdgen.service... Feb 9 02:54:59.112588 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 02:54:59.113423 systemd[1]: Starting prepare-critools.service... Feb 9 02:54:59.114398 systemd[1]: Starting prepare-helm.service... Feb 9 02:55:43.881029 systemd-resolved[1106]: Clock change detected. Flushing caches. Feb 9 02:55:43.881142 systemd-timesyncd[1107]: Contacted time server 137.190.2.4:123 (0.flatcar.pool.ntp.org). Feb 9 02:55:43.882174 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 02:55:43.882909 systemd-timesyncd[1107]: Initial clock synchronization to Fri 2024-02-09 02:55:43.881003 UTC. Feb 9 02:55:43.883668 systemd[1]: Starting sshd-keygen.service... Feb 9 02:55:43.886236 systemd[1]: Starting systemd-logind.service... Feb 9 02:55:43.886356 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 02:55:43.886393 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 9 02:55:43.886866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 02:55:43.906644 jq[1148]: true Feb 9 02:55:43.888014 systemd[1]: Starting update-engine.service... Feb 9 02:55:43.914456 tar[1152]: ./ Feb 9 02:55:43.914456 tar[1152]: ./macvlan Feb 9 02:55:43.888847 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 02:55:43.889968 systemd[1]: Starting vmtoolsd.service... Feb 9 02:55:43.914754 jq[1155]: true Feb 9 02:55:43.892336 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 02:55:43.892463 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 02:55:43.917193 tar[1154]: linux-amd64/helm Feb 9 02:55:43.893577 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 02:55:43.919335 tar[1153]: crictl Feb 9 02:55:43.893671 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 02:55:43.908177 systemd[1]: Started vmtoolsd.service. Feb 9 02:55:43.940567 dbus-daemon[1133]: [system] SELinux support is enabled Feb 9 02:55:43.940662 systemd[1]: Started dbus.service. Feb 9 02:55:43.941993 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 02:55:43.942014 systemd[1]: Reached target system-config.target. Feb 9 02:55:43.942127 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 02:55:43.942138 systemd[1]: Reached target user-config.target. 
Feb 9 02:55:43.954639 env[1156]: time="2024-02-09T02:55:43.954604057Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 02:55:43.957086 extend-filesystems[1135]: Found sda Feb 9 02:55:43.957335 extend-filesystems[1135]: Found sda1 Feb 9 02:55:43.957335 extend-filesystems[1135]: Found sda2 Feb 9 02:55:43.957335 extend-filesystems[1135]: Found sda3 Feb 9 02:55:43.957335 extend-filesystems[1135]: Found usr Feb 9 02:55:43.957335 extend-filesystems[1135]: Found sda4 Feb 9 02:55:43.957335 extend-filesystems[1135]: Found sda6 Feb 9 02:55:43.957335 extend-filesystems[1135]: Found sda7 Feb 9 02:55:43.957335 extend-filesystems[1135]: Found sda9 Feb 9 02:55:43.957335 extend-filesystems[1135]: Checking size of /dev/sda9 Feb 9 02:55:43.962185 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 02:55:43.962297 systemd[1]: Finished motdgen.service. Feb 9 02:55:43.978927 kernel: NET: Registered PF_VSOCK protocol family Feb 9 02:55:43.998163 extend-filesystems[1135]: Old size kept for /dev/sda9 Feb 9 02:55:43.998334 extend-filesystems[1135]: Found sr0 Feb 9 02:55:43.998485 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 02:55:43.998590 systemd[1]: Finished extend-filesystems.service. Feb 9 02:55:44.011154 bash[1187]: Updated "/home/core/.ssh/authorized_keys" Feb 9 02:55:44.011555 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 02:55:44.024310 env[1156]: time="2024-02-09T02:55:44.024285471Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 02:55:44.024450 tar[1152]: ./static Feb 9 02:55:44.025099 env[1156]: time="2024-02-09T02:55:44.024936386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 02:55:44.028009 env[1156]: time="2024-02-09T02:55:44.027990757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 02:55:44.028076 env[1156]: time="2024-02-09T02:55:44.028065217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 02:55:44.028240 env[1156]: time="2024-02-09T02:55:44.028226326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 02:55:44.028293 env[1156]: time="2024-02-09T02:55:44.028282712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 02:55:44.028343 env[1156]: time="2024-02-09T02:55:44.028331812Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 02:55:44.028392 env[1156]: time="2024-02-09T02:55:44.028376464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 02:55:44.028511 env[1156]: time="2024-02-09T02:55:44.028501296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 02:55:44.031014 env[1156]: time="2024-02-09T02:55:44.031001833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 02:55:44.031321 env[1156]: time="2024-02-09T02:55:44.031307329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 02:55:44.031416 systemd-logind[1145]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 02:55:44.031567 systemd-logind[1145]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 02:55:44.032420 env[1156]: time="2024-02-09T02:55:44.032403663Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 02:55:44.032531 env[1156]: time="2024-02-09T02:55:44.032518907Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 02:55:44.032598 env[1156]: time="2024-02-09T02:55:44.032584566Z" level=info msg="metadata content store policy set" policy=shared Feb 9 02:55:44.032691 systemd-logind[1145]: New seat seat0. Feb 9 02:55:44.039894 systemd[1]: Started systemd-logind.service. Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053744096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053778931Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053787235Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053832321Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053844533Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053852306Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053859043Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053866476Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053900981Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053911682Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053932751Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.053951442Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.054033145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 02:55:44.055419 env[1156]: time="2024-02-09T02:55:44.054102320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054271499Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054297164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054311670Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054346873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054355194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054398428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054408444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054415245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054421870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054428414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054445611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054457469Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054541586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054551743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055699 env[1156]: time="2024-02-09T02:55:44.054558439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 02:55:44.055924 env[1156]: time="2024-02-09T02:55:44.054564502Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 02:55:44.055924 env[1156]: time="2024-02-09T02:55:44.054573660Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 02:55:44.055924 env[1156]: time="2024-02-09T02:55:44.054579912Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 02:55:44.055924 env[1156]: time="2024-02-09T02:55:44.054589953Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 02:55:44.055924 env[1156]: time="2024-02-09T02:55:44.054621258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 02:55:44.056021 env[1156]: time="2024-02-09T02:55:44.054758170Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 02:55:44.056021 env[1156]: time="2024-02-09T02:55:44.054793132Z" level=info msg="Connect containerd service" Feb 9 02:55:44.056021 env[1156]: time="2024-02-09T02:55:44.054810263Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 02:55:44.056021 env[1156]: time="2024-02-09T02:55:44.055165475Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.056574151Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.056597969Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.056633956Z" level=info msg="containerd successfully booted in 0.102529s" Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.061876766Z" level=info msg="Start subscribing containerd event" Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.061924816Z" level=info msg="Start recovering state" Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.061969429Z" level=info msg="Start event monitor" Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.061991987Z" level=info msg="Start snapshots syncer" Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.062002982Z" level=info msg="Start cni network conf syncer for default" Feb 9 02:55:44.070558 env[1156]: time="2024-02-09T02:55:44.062010068Z" level=info msg="Start streaming server" Feb 9 02:55:44.056667 systemd[1]: Started containerd.service. 
Feb 9 02:55:44.092803 tar[1152]: ./vlan Feb 9 02:55:44.126614 update_engine[1146]: I0209 02:55:44.119830 1146 main.cc:92] Flatcar Update Engine starting Feb 9 02:55:44.134780 systemd[1]: Started update-engine.service. Feb 9 02:55:44.134892 update_engine[1146]: I0209 02:55:44.134789 1146 update_check_scheduler.cc:74] Next update check in 11m6s Feb 9 02:55:44.136258 systemd[1]: Started locksmithd.service. Feb 9 02:55:44.147805 tar[1152]: ./portmap Feb 9 02:55:44.194093 tar[1152]: ./host-local Feb 9 02:55:44.223508 tar[1152]: ./vrf Feb 9 02:55:44.285375 tar[1152]: ./bridge Feb 9 02:55:44.361832 tar[1152]: ./tuning Feb 9 02:55:44.426318 tar[1152]: ./firewall Feb 9 02:55:44.443114 tar[1154]: linux-amd64/LICENSE Feb 9 02:55:44.443180 tar[1154]: linux-amd64/README.md Feb 9 02:55:44.446340 systemd[1]: Finished prepare-helm.service. Feb 9 02:55:44.484843 tar[1152]: ./host-device Feb 9 02:55:44.534405 tar[1152]: ./sbr Feb 9 02:55:44.567451 tar[1152]: ./loopback Feb 9 02:55:44.586690 tar[1152]: ./dhcp Feb 9 02:55:44.613732 systemd[1]: Finished prepare-critools.service. Feb 9 02:55:44.643836 tar[1152]: ./ptp Feb 9 02:55:44.668519 tar[1152]: ./ipvlan Feb 9 02:55:44.693437 tar[1152]: ./bandwidth Feb 9 02:55:44.739880 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 02:55:44.844823 locksmithd[1203]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 02:55:44.991943 sshd_keygen[1168]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 02:55:45.004547 systemd[1]: Finished sshd-keygen.service. Feb 9 02:55:45.005919 systemd[1]: Starting issuegen.service... Feb 9 02:55:45.009060 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 02:55:45.009147 systemd[1]: Finished issuegen.service. Feb 9 02:55:45.010179 systemd[1]: Starting systemd-user-sessions.service... Feb 9 02:55:45.021150 systemd[1]: Finished systemd-user-sessions.service. Feb 9 02:55:45.022047 systemd[1]: Started getty@tty1.service. 
Feb 9 02:55:45.022855 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 02:55:45.023100 systemd[1]: Reached target getty.target. Feb 9 02:55:45.023243 systemd[1]: Reached target multi-user.target. Feb 9 02:55:45.024165 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 02:55:45.029068 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 02:55:45.029164 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 02:55:45.029346 systemd[1]: Startup finished in 878ms (kernel) + 3min 36.789s (initrd) + 4.929s (userspace) = 3min 42.597s. Feb 9 02:55:45.175140 login[1268]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 02:55:45.175679 login[1267]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 02:55:45.182456 systemd[1]: Created slice user-500.slice. Feb 9 02:55:45.183347 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 02:55:45.186658 systemd-logind[1145]: New session 2 of user core. Feb 9 02:55:45.189537 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 02:55:45.190505 systemd[1]: Starting user@500.service... Feb 9 02:55:45.192893 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:55:45.276239 systemd[1271]: Queued start job for default target default.target. Feb 9 02:55:45.276630 systemd[1271]: Reached target paths.target. Feb 9 02:55:45.276644 systemd[1271]: Reached target sockets.target. Feb 9 02:55:45.276652 systemd[1271]: Reached target timers.target. Feb 9 02:55:45.276659 systemd[1271]: Reached target basic.target. Feb 9 02:55:45.276723 systemd[1]: Started user@500.service. Feb 9 02:55:45.277567 systemd[1]: Started session-2.scope. Feb 9 02:55:45.278168 systemd[1271]: Reached target default.target. Feb 9 02:55:45.278286 systemd[1271]: Startup finished in 81ms. 
Feb 9 02:55:46.177108 login[1268]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 02:55:46.180710 systemd[1]: Started session-1.scope. Feb 9 02:55:46.180801 systemd-logind[1145]: New session 1 of user core. Feb 9 02:56:24.114417 systemd[1]: Created slice system-sshd.slice. Feb 9 02:56:24.115359 systemd[1]: Started sshd@0-139.178.70.99:22-147.75.109.163:37152.service. Feb 9 02:56:24.160076 sshd[1292]: Accepted publickey for core from 147.75.109.163 port 37152 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:56:24.160856 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:56:24.164038 systemd[1]: Started session-3.scope. Feb 9 02:56:24.164576 systemd-logind[1145]: New session 3 of user core. Feb 9 02:56:24.212267 systemd[1]: Started sshd@1-139.178.70.99:22-147.75.109.163:37164.service. Feb 9 02:56:24.253571 sshd[1297]: Accepted publickey for core from 147.75.109.163 port 37164 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:56:24.254590 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:56:24.256944 systemd-logind[1145]: New session 4 of user core. Feb 9 02:56:24.257615 systemd[1]: Started session-4.scope. Feb 9 02:56:24.312241 systemd[1]: Started sshd@2-139.178.70.99:22-147.75.109.163:37166.service. Feb 9 02:56:24.312631 sshd[1297]: pam_unix(sshd:session): session closed for user core Feb 9 02:56:24.315723 systemd[1]: sshd@1-139.178.70.99:22-147.75.109.163:37164.service: Deactivated successfully. Feb 9 02:56:24.316236 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 02:56:24.316729 systemd-logind[1145]: Session 4 logged out. Waiting for processes to exit. Feb 9 02:56:24.317759 systemd-logind[1145]: Removed session 4. 
Feb 9 02:56:24.345303 sshd[1302]: Accepted publickey for core from 147.75.109.163 port 37166 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:56:24.346333 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:56:24.349602 systemd[1]: Started session-5.scope. Feb 9 02:56:24.350601 systemd-logind[1145]: New session 5 of user core. Feb 9 02:56:24.399706 sshd[1302]: pam_unix(sshd:session): session closed for user core Feb 9 02:56:24.401816 systemd[1]: Started sshd@3-139.178.70.99:22-147.75.109.163:38476.service. Feb 9 02:56:24.402189 systemd[1]: sshd@2-139.178.70.99:22-147.75.109.163:37166.service: Deactivated successfully. Feb 9 02:56:24.402606 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 02:56:24.402987 systemd-logind[1145]: Session 5 logged out. Waiting for processes to exit. Feb 9 02:56:24.403622 systemd-logind[1145]: Removed session 5. Feb 9 02:56:24.433588 sshd[1308]: Accepted publickey for core from 147.75.109.163 port 38476 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:56:24.434296 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:56:24.436617 systemd-logind[1145]: New session 6 of user core. Feb 9 02:56:24.437061 systemd[1]: Started session-6.scope. Feb 9 02:56:24.487566 sshd[1308]: pam_unix(sshd:session): session closed for user core Feb 9 02:56:24.489435 systemd[1]: sshd@3-139.178.70.99:22-147.75.109.163:38476.service: Deactivated successfully. Feb 9 02:56:24.489759 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 02:56:24.490107 systemd-logind[1145]: Session 6 logged out. Waiting for processes to exit. Feb 9 02:56:24.490711 systemd[1]: Started sshd@4-139.178.70.99:22-147.75.109.163:38492.service. Feb 9 02:56:24.491418 systemd-logind[1145]: Removed session 6. 
Feb 9 02:56:24.521166 sshd[1315]: Accepted publickey for core from 147.75.109.163 port 38492 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:56:24.522018 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:56:24.524951 systemd-logind[1145]: New session 7 of user core. Feb 9 02:56:24.525627 systemd[1]: Started session-7.scope. Feb 9 02:56:24.584950 sudo[1318]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 02:56:24.585066 sudo[1318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 02:56:25.325691 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 02:56:25.329497 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 02:56:25.329676 systemd[1]: Reached target network-online.target. Feb 9 02:56:25.330570 systemd[1]: Starting docker.service... Feb 9 02:56:25.357305 env[1334]: time="2024-02-09T02:56:25.357275265Z" level=info msg="Starting up" Feb 9 02:56:25.358198 env[1334]: time="2024-02-09T02:56:25.358186351Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 02:56:25.358250 env[1334]: time="2024-02-09T02:56:25.358239769Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 02:56:25.358305 env[1334]: time="2024-02-09T02:56:25.358294343Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 02:56:25.358347 env[1334]: time="2024-02-09T02:56:25.358338022Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 02:56:25.359536 env[1334]: time="2024-02-09T02:56:25.359525279Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 02:56:25.359589 env[1334]: time="2024-02-09T02:56:25.359579349Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 02:56:25.359653 env[1334]: time="2024-02-09T02:56:25.359643154Z" 
level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 02:56:25.359696 env[1334]: time="2024-02-09T02:56:25.359687634Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 02:56:25.364082 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1258339668-merged.mount: Deactivated successfully. Feb 9 02:56:25.382979 env[1334]: time="2024-02-09T02:56:25.382960679Z" level=info msg="Loading containers: start." Feb 9 02:56:25.459934 kernel: Initializing XFRM netlink socket Feb 9 02:56:25.481380 env[1334]: time="2024-02-09T02:56:25.481361053Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 02:56:25.517862 systemd-networkd[1064]: docker0: Link UP Feb 9 02:56:25.522063 env[1334]: time="2024-02-09T02:56:25.522044798Z" level=info msg="Loading containers: done." Feb 9 02:56:25.529376 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1614523024-merged.mount: Deactivated successfully. Feb 9 02:56:25.531990 env[1334]: time="2024-02-09T02:56:25.531971467Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 02:56:25.532179 env[1334]: time="2024-02-09T02:56:25.532167953Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 02:56:25.532269 env[1334]: time="2024-02-09T02:56:25.532260292Z" level=info msg="Daemon has completed initialization" Feb 9 02:56:25.540291 systemd[1]: Started docker.service. Feb 9 02:56:25.543114 env[1334]: time="2024-02-09T02:56:25.543082543Z" level=info msg="API listen on /run/docker.sock" Feb 9 02:56:25.553109 systemd[1]: Reloading. 
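The daemon's note that the default bridge (docker0) takes 172.17.0.0/16 and that `--bip` can set a preferred address maps to a one-line daemon configuration; a sketch of `/etc/docker/daemon.json` (the 10.200.0.1/24 range here is an arbitrary illustration, not taken from this host):

```json
{
  "bip": "10.200.0.1/24"
}
```

The daemon must be restarted for the new bridge address to take effect, and existing containers keep addresses from the old range until recreated.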
Feb 9 02:56:25.599445 /usr/lib/systemd/system-generators/torcx-generator[1470]: time="2024-02-09T02:56:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 02:56:25.599461 /usr/lib/systemd/system-generators/torcx-generator[1470]: time="2024-02-09T02:56:25Z" level=info msg="torcx already run" Feb 9 02:56:25.652160 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 02:56:25.652172 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 02:56:25.663409 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 02:56:25.715905 systemd[1]: Started kubelet.service. Feb 9 02:56:25.765312 kubelet[1530]: E0209 02:56:25.765266 1530 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 02:56:25.766675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 02:56:25.766750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 02:56:26.262585 env[1156]: time="2024-02-09T02:56:26.262409376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 02:56:26.864439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1694160216.mount: Deactivated successfully. 
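The reload surfaces two deprecation warnings for locksmithd.service: `CPUShares=` and `MemoryLimit=` are cgroup-v1 directives slated for removal. A drop-in could migrate them without touching the shipped unit; a sketch only, since the unit's actual values are not shown in this log (the weight and limit below are placeholders):

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
[Service]
# Clear the deprecated directives, then set their replacements.
CPUShares=
CPUWeight=100
MemoryLimit=
MemoryMax=512M
```

A `systemctl daemon-reload` after adding the drop-in makes the warnings disappear on the next reload.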
Feb 9 02:56:28.646491 env[1156]: time="2024-02-09T02:56:28.646445782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:28.647826 env[1156]: time="2024-02-09T02:56:28.647804801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:28.649427 env[1156]: time="2024-02-09T02:56:28.649406749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:28.651219 env[1156]: time="2024-02-09T02:56:28.651196581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:28.651819 env[1156]: time="2024-02-09T02:56:28.651791623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 02:56:28.660084 env[1156]: time="2024-02-09T02:56:28.660058625Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 02:56:29.436176 update_engine[1146]: I0209 02:56:29.435956 1146 update_attempter.cc:509] Updating boot flags... 
Feb 9 02:56:30.835622 env[1156]: time="2024-02-09T02:56:30.835576925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:30.842980 env[1156]: time="2024-02-09T02:56:30.842958435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:30.847838 env[1156]: time="2024-02-09T02:56:30.847820566Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:30.849326 env[1156]: time="2024-02-09T02:56:30.849301369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:30.850009 env[1156]: time="2024-02-09T02:56:30.849984452Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 02:56:30.858323 env[1156]: time="2024-02-09T02:56:30.858274160Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 02:56:32.429762 env[1156]: time="2024-02-09T02:56:32.429726732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:32.430551 env[1156]: time="2024-02-09T02:56:32.430532044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 02:56:32.433059 env[1156]: time="2024-02-09T02:56:32.433038281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:32.433942 env[1156]: time="2024-02-09T02:56:32.433927476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:32.434427 env[1156]: time="2024-02-09T02:56:32.434413054Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 02:56:32.440479 env[1156]: time="2024-02-09T02:56:32.440457939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 02:56:33.544039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount640634126.mount: Deactivated successfully. 
Feb 9 02:56:33.935962 env[1156]: time="2024-02-09T02:56:33.935813079Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:33.937046 env[1156]: time="2024-02-09T02:56:33.937031590Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:33.938673 env[1156]: time="2024-02-09T02:56:33.938658870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:33.939769 env[1156]: time="2024-02-09T02:56:33.939754418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 02:56:33.941939 env[1156]: time="2024-02-09T02:56:33.940713557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:33.947347 env[1156]: time="2024-02-09T02:56:33.947326095Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 02:56:34.433830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280425263.mount: Deactivated successfully. 
Feb 9 02:56:34.435900 env[1156]: time="2024-02-09T02:56:34.435880308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:34.436382 env[1156]: time="2024-02-09T02:56:34.436369318Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:34.437169 env[1156]: time="2024-02-09T02:56:34.437153048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:34.437904 env[1156]: time="2024-02-09T02:56:34.437892047Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:34.438265 env[1156]: time="2024-02-09T02:56:34.438251423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 02:56:34.443561 env[1156]: time="2024-02-09T02:56:34.443538299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 02:56:35.548232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372436631.mount: Deactivated successfully. Feb 9 02:56:36.011627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 02:56:36.011790 systemd[1]: Stopped kubelet.service. Feb 9 02:56:36.013233 systemd[1]: Started kubelet.service. 
Feb 9 02:56:36.051668 kubelet[1589]: E0209 02:56:36.051615 1589 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 02:56:36.054378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 02:56:36.054493 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 02:56:38.979336 env[1156]: time="2024-02-09T02:56:38.979307500Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:38.980422 env[1156]: time="2024-02-09T02:56:38.980408283Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:38.981251 env[1156]: time="2024-02-09T02:56:38.981234909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:38.982160 env[1156]: time="2024-02-09T02:56:38.982146089Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:38.982600 env[1156]: time="2024-02-09T02:56:38.982582729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 02:56:38.988599 env[1156]: time="2024-02-09T02:56:38.988580221Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 02:56:39.665989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3513017237.mount: Deactivated successfully. 
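Both kubelet start attempts so far fail identically: `--container-runtime-endpoint` is unset. Since this host runs containerd (the image-pull events above come from it), the fix is to point the kubelet at containerd's CRI socket. A sketch as a systemd drop-in; the drop-in path and the `KUBELET_EXTRA_ARGS` variable name are assumptions about how this kubelet unit is wired, not taken from the log:

```ini
# /etc/systemd/system/kubelet.service.d/20-runtime.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

The successful start later in this log (kuberuntime_manager reports containerRuntime="containerd") is consistent with the endpoint being supplied on the third attempt.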
Feb 9 02:56:40.150184 env[1156]: time="2024-02-09T02:56:40.150128081Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:40.150815 env[1156]: time="2024-02-09T02:56:40.150797981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:40.151590 env[1156]: time="2024-02-09T02:56:40.151575140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:40.152363 env[1156]: time="2024-02-09T02:56:40.152348407Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:40.152744 env[1156]: time="2024-02-09T02:56:40.152728135Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 02:56:41.716423 systemd[1]: Stopped kubelet.service. Feb 9 02:56:41.726316 systemd[1]: Reloading. 
Feb 9 02:56:41.784290 /usr/lib/systemd/system-generators/torcx-generator[1677]: time="2024-02-09T02:56:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 02:56:41.784553 /usr/lib/systemd/system-generators/torcx-generator[1677]: time="2024-02-09T02:56:41Z" level=info msg="torcx already run" Feb 9 02:56:41.837173 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 02:56:41.837187 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 02:56:41.848222 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 02:56:41.906771 systemd[1]: Started kubelet.service. Feb 9 02:56:41.939906 kubelet[1738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 02:56:41.939906 kubelet[1738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 02:56:41.940209 kubelet[1738]: I0209 02:56:41.939931 1738 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 02:56:41.941409 kubelet[1738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 02:56:41.941409 kubelet[1738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 02:56:42.240811 kubelet[1738]: I0209 02:56:42.240791 1738 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 02:56:42.240811 kubelet[1738]: I0209 02:56:42.240810 1738 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 02:56:42.240976 kubelet[1738]: I0209 02:56:42.240964 1738 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 02:56:42.251774 kubelet[1738]: E0209 02:56:42.251761 1738 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.251890 kubelet[1738]: I0209 02:56:42.251878 1738 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 02:56:42.253880 kubelet[1738]: I0209 02:56:42.253866 1738 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 02:56:42.254058 kubelet[1738]: I0209 02:56:42.254044 1738 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 02:56:42.254120 kubelet[1738]: I0209 02:56:42.254099 1738 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 02:56:42.254199 kubelet[1738]: I0209 02:56:42.254130 1738 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 02:56:42.254199 kubelet[1738]: I0209 02:56:42.254141 1738 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 02:56:42.254679 kubelet[1738]: I0209 02:56:42.254663 1738 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 02:56:42.262221 kubelet[1738]: I0209 02:56:42.262211 1738 kubelet.go:398] "Attempting to sync node with API server" Feb 9 02:56:42.262297 kubelet[1738]: I0209 02:56:42.262285 1738 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 02:56:42.262363 kubelet[1738]: I0209 02:56:42.262355 1738 kubelet.go:297] "Adding apiserver pod source" Feb 9 02:56:42.262422 kubelet[1738]: I0209 02:56:42.262414 1738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 02:56:42.262924 kubelet[1738]: W0209 02:56:42.262888 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.262960 kubelet[1738]: E0209 02:56:42.262926 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.265632 kubelet[1738]: I0209 02:56:42.265612 1738 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 02:56:42.266671 kubelet[1738]: W0209 02:56:42.266658 1738 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
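The HardEvictionThresholds dumped in the node config above (imagefs.available 15%, memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%) are the kubelet's stock defaults. Expressed in the KubeletConfiguration file format that the deprecation messages earlier recommend over flags, they would look like the sketch below (not taken from this host's actual config file):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  imagefs.available: "15%"
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
```

Each threshold corresponds one-to-one with a `{Signal ... Operator:LessThan ...}` entry in the logged nodeConfig.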
Feb 9 02:56:42.268860 kubelet[1738]: I0209 02:56:42.268846 1738 server.go:1186] "Started kubelet" Feb 9 02:56:42.271076 kubelet[1738]: I0209 02:56:42.271064 1738 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 02:56:42.272327 kubelet[1738]: I0209 02:56:42.272313 1738 server.go:451] "Adding debug handlers to kubelet server" Feb 9 02:56:42.274170 kubelet[1738]: E0209 02:56:42.274161 1738 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 02:56:42.274226 kubelet[1738]: E0209 02:56:42.274218 1738 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 02:56:42.275645 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 02:56:42.275726 kubelet[1738]: I0209 02:56:42.275714 1738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 02:56:42.275775 kubelet[1738]: E0209 02:56:42.274289 1738 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2125dace5a77f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 2, 56, 
42, 268829567, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 2, 56, 42, 268829567, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://139.178.70.99:6443/api/v1/namespaces/default/events": dial tcp 139.178.70.99:6443: connect: connection refused'(may retry after sleeping) Feb 9 02:56:42.276496 kubelet[1738]: W0209 02:56:42.276475 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.276553 kubelet[1738]: E0209 02:56:42.276545 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.278506 kubelet[1738]: I0209 02:56:42.278491 1738 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 02:56:42.279284 kubelet[1738]: I0209 02:56:42.279272 1738 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 02:56:42.281486 kubelet[1738]: W0209 02:56:42.281468 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.281554 kubelet[1738]: E0209 02:56:42.281546 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: 
connect: connection refused Feb 9 02:56:42.281629 kubelet[1738]: E0209 02:56:42.281619 1738 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.301008 kubelet[1738]: I0209 02:56:42.300988 1738 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 02:56:42.310389 kubelet[1738]: I0209 02:56:42.310372 1738 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 02:56:42.310389 kubelet[1738]: I0209 02:56:42.310384 1738 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 02:56:42.310389 kubelet[1738]: I0209 02:56:42.310392 1738 state_mem.go:36] "Initialized new in-memory state store" Feb 9 02:56:42.311119 kubelet[1738]: I0209 02:56:42.311108 1738 policy_none.go:49] "None policy: Start" Feb 9 02:56:42.311375 kubelet[1738]: I0209 02:56:42.311363 1738 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 02:56:42.311416 kubelet[1738]: I0209 02:56:42.311376 1738 state_mem.go:35] "Initializing new in-memory state store" Feb 9 02:56:42.314585 systemd[1]: Created slice kubepods.slice. Feb 9 02:56:42.316837 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 02:56:42.319176 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 02:56:42.322368 kubelet[1738]: I0209 02:56:42.322359 1738 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 02:56:42.323693 kubelet[1738]: E0209 02:56:42.323592 1738 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 02:56:42.323799 kubelet[1738]: I0209 02:56:42.323793 1738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 02:56:42.326896 kubelet[1738]: I0209 02:56:42.326885 1738 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 02:56:42.326970 kubelet[1738]: I0209 02:56:42.326901 1738 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 02:56:42.326970 kubelet[1738]: I0209 02:56:42.326925 1738 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 02:56:42.326970 kubelet[1738]: E0209 02:56:42.326953 1738 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 02:56:42.327438 kubelet[1738]: W0209 02:56:42.327423 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.327500 kubelet[1738]: E0209 02:56:42.327447 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.379956 kubelet[1738]: I0209 02:56:42.379940 1738 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 02:56:42.380265 kubelet[1738]: E0209 02:56:42.380256 1738 kubelet_node_status.go:92] "Unable to register node with API 
server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Feb 9 02:56:42.427468 kubelet[1738]: I0209 02:56:42.427443 1738 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:56:42.428344 kubelet[1738]: I0209 02:56:42.428334 1738 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:56:42.429099 kubelet[1738]: I0209 02:56:42.429090 1738 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:56:42.430507 kubelet[1738]: I0209 02:56:42.430498 1738 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused" Feb 9 02:56:42.431572 kubelet[1738]: I0209 02:56:42.431510 1738 status_manager.go:698] "Failed to get status for pod" podUID=b07f05f31a5546916faca3d4c4b4612a pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused" Feb 9 02:56:42.431611 kubelet[1738]: I0209 02:56:42.431587 1738 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused" Feb 9 02:56:42.433076 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. Feb 9 02:56:42.441204 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. Feb 9 02:56:42.443828 systemd[1]: Created slice kubepods-burstable-podb07f05f31a5546916faca3d4c4b4612a.slice. 
Feb 9 02:56:42.482273 kubelet[1738]: E0209 02:56:42.482251 1738 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.580793 kubelet[1738]: I0209 02:56:42.580769 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:42.580946 kubelet[1738]: I0209 02:56:42.580939 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 02:56:42.581017 kubelet[1738]: I0209 02:56:42.581009 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b07f05f31a5546916faca3d4c4b4612a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b07f05f31a5546916faca3d4c4b4612a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 02:56:42.581091 kubelet[1738]: I0209 02:56:42.581083 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:42.581182 kubelet[1738]: I0209 02:56:42.581154 1738 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:42.581249 kubelet[1738]: I0209 02:56:42.581241 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:42.581311 kubelet[1738]: I0209 02:56:42.581303 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b07f05f31a5546916faca3d4c4b4612a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b07f05f31a5546916faca3d4c4b4612a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 02:56:42.581373 kubelet[1738]: I0209 02:56:42.581360 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b07f05f31a5546916faca3d4c4b4612a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b07f05f31a5546916faca3d4c4b4612a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 02:56:42.581625 kubelet[1738]: I0209 02:56:42.581617 1738 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:42.581719 kubelet[1738]: I0209 02:56:42.581669 1738 kubelet_node_status.go:70] 
"Attempting to register node" node="localhost" Feb 9 02:56:42.581980 kubelet[1738]: E0209 02:56:42.581966 1738 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Feb 9 02:56:42.740424 env[1156]: time="2024-02-09T02:56:42.740155792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 02:56:42.743143 env[1156]: time="2024-02-09T02:56:42.743123904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 02:56:42.745548 env[1156]: time="2024-02-09T02:56:42.745529243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b07f05f31a5546916faca3d4c4b4612a,Namespace:kube-system,Attempt:0,}" Feb 9 02:56:42.883229 kubelet[1738]: E0209 02:56:42.882933 1738 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:42.983111 kubelet[1738]: I0209 02:56:42.983093 1738 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 02:56:42.983518 kubelet[1738]: E0209 02:56:42.983509 1738 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Feb 9 02:56:43.155458 kubelet[1738]: W0209 02:56:43.155288 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.155565 kubelet[1738]: E0209 02:56:43.155557 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.393299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941772162.mount: Deactivated successfully. Feb 9 02:56:43.395545 env[1156]: time="2024-02-09T02:56:43.395526100Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.396067 env[1156]: time="2024-02-09T02:56:43.396055794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.396519 env[1156]: time="2024-02-09T02:56:43.396504709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.397905 env[1156]: time="2024-02-09T02:56:43.397893204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.399458 env[1156]: time="2024-02-09T02:56:43.399441743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.401152 env[1156]: time="2024-02-09T02:56:43.401139162Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.401779 env[1156]: time="2024-02-09T02:56:43.401765049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.404096 env[1156]: time="2024-02-09T02:56:43.404079479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.405008 env[1156]: time="2024-02-09T02:56:43.404993921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.406037 env[1156]: time="2024-02-09T02:56:43.405800871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.407059 env[1156]: time="2024-02-09T02:56:43.407044156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.407577 env[1156]: time="2024-02-09T02:56:43.407561295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:56:43.412437 kubelet[1738]: W0209 02:56:43.412217 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get 
"https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.412437 kubelet[1738]: E0209 02:56:43.412239 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.430235 env[1156]: time="2024-02-09T02:56:43.419474244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:56:43.430235 env[1156]: time="2024-02-09T02:56:43.419493310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:56:43.430235 env[1156]: time="2024-02-09T02:56:43.419500254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:56:43.430235 env[1156]: time="2024-02-09T02:56:43.419590850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59ab05e75c2c1b40102cc28dedc40dbabdaa87a6d23ab78f3d4047cc5cb4f56 pid=1825 runtime=io.containerd.runc.v2 Feb 9 02:56:43.430482 env[1156]: time="2024-02-09T02:56:43.417979371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:56:43.430482 env[1156]: time="2024-02-09T02:56:43.418010408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:56:43.430482 env[1156]: time="2024-02-09T02:56:43.418017668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:56:43.430482 env[1156]: time="2024-02-09T02:56:43.418089832Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3aa9d81ae88c42109ba6adf2bb50f9a879314ba3c22f879d26dd925a5eceed99 pid=1820 runtime=io.containerd.runc.v2 Feb 9 02:56:43.446039 systemd[1]: Started cri-containerd-3aa9d81ae88c42109ba6adf2bb50f9a879314ba3c22f879d26dd925a5eceed99.scope. Feb 9 02:56:43.452875 env[1156]: time="2024-02-09T02:56:43.448499817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:56:43.452875 env[1156]: time="2024-02-09T02:56:43.448521541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:56:43.452875 env[1156]: time="2024-02-09T02:56:43.448532471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:56:43.452875 env[1156]: time="2024-02-09T02:56:43.448605719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62379bff8f5c979af152947fc7ef9e146080a0da7629b11b30254991aff8ef37 pid=1860 runtime=io.containerd.runc.v2 Feb 9 02:56:43.469329 systemd[1]: Started cri-containerd-c59ab05e75c2c1b40102cc28dedc40dbabdaa87a6d23ab78f3d4047cc5cb4f56.scope. Feb 9 02:56:43.479852 systemd[1]: Started cri-containerd-62379bff8f5c979af152947fc7ef9e146080a0da7629b11b30254991aff8ef37.scope. 
Feb 9 02:56:43.495041 env[1156]: time="2024-02-09T02:56:43.495014657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aa9d81ae88c42109ba6adf2bb50f9a879314ba3c22f879d26dd925a5eceed99\"" Feb 9 02:56:43.497109 env[1156]: time="2024-02-09T02:56:43.497094668Z" level=info msg="CreateContainer within sandbox \"3aa9d81ae88c42109ba6adf2bb50f9a879314ba3c22f879d26dd925a5eceed99\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 02:56:43.519986 env[1156]: time="2024-02-09T02:56:43.519961417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c59ab05e75c2c1b40102cc28dedc40dbabdaa87a6d23ab78f3d4047cc5cb4f56\"" Feb 9 02:56:43.527506 env[1156]: time="2024-02-09T02:56:43.527487528Z" level=info msg="CreateContainer within sandbox \"c59ab05e75c2c1b40102cc28dedc40dbabdaa87a6d23ab78f3d4047cc5cb4f56\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 02:56:43.533976 env[1156]: time="2024-02-09T02:56:43.533954687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b07f05f31a5546916faca3d4c4b4612a,Namespace:kube-system,Attempt:0,} returns sandbox id \"62379bff8f5c979af152947fc7ef9e146080a0da7629b11b30254991aff8ef37\"" Feb 9 02:56:43.535312 env[1156]: time="2024-02-09T02:56:43.535295085Z" level=info msg="CreateContainer within sandbox \"62379bff8f5c979af152947fc7ef9e146080a0da7629b11b30254991aff8ef37\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 02:56:43.641026 kubelet[1738]: W0209 02:56:43.640992 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.641026 kubelet[1738]: E0209 02:56:43.641027 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.645270 kubelet[1738]: W0209 02:56:43.645250 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.645314 kubelet[1738]: E0209 02:56:43.645272 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.684236 kubelet[1738]: E0209 02:56:43.683727 1738 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:43.784687 kubelet[1738]: I0209 02:56:43.784669 1738 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 02:56:43.784849 kubelet[1738]: E0209 02:56:43.784834 1738 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Feb 9 02:56:44.053662 env[1156]: time="2024-02-09T02:56:44.053628707Z" level=info msg="CreateContainer within sandbox \"62379bff8f5c979af152947fc7ef9e146080a0da7629b11b30254991aff8ef37\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0236c451a067c8c019b8624a0660c9953fd027193d2c0c0def8a0d9e172795c1\"" Feb 9 02:56:44.054054 env[1156]: time="2024-02-09T02:56:44.054038669Z" level=info msg="StartContainer for \"0236c451a067c8c019b8624a0660c9953fd027193d2c0c0def8a0d9e172795c1\"" Feb 9 02:56:44.063740 systemd[1]: Started cri-containerd-0236c451a067c8c019b8624a0660c9953fd027193d2c0c0def8a0d9e172795c1.scope. Feb 9 02:56:44.277332 kubelet[1738]: E0209 02:56:44.277312 1738 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:44.399297 env[1156]: time="2024-02-09T02:56:44.399105994Z" level=info msg="StartContainer for \"0236c451a067c8c019b8624a0660c9953fd027193d2c0c0def8a0d9e172795c1\" returns successfully" Feb 9 02:56:44.401431 env[1156]: time="2024-02-09T02:56:44.401409226Z" level=info msg="CreateContainer within sandbox \"c59ab05e75c2c1b40102cc28dedc40dbabdaa87a6d23ab78f3d4047cc5cb4f56\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"54de6cf3e8d0c99c73498a30853129ae239ad9c8e7e6eb2ebadd852f558a2f8c\"" Feb 9 02:56:44.401628 env[1156]: time="2024-02-09T02:56:44.401615173Z" level=info msg="CreateContainer within sandbox \"3aa9d81ae88c42109ba6adf2bb50f9a879314ba3c22f879d26dd925a5eceed99\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ac839886b21e44b009058095842d15d0eb18dba3240fbf741be44c6223f17e8\"" Feb 9 02:56:44.401795 env[1156]: time="2024-02-09T02:56:44.401634948Z" level=info msg="StartContainer for \"54de6cf3e8d0c99c73498a30853129ae239ad9c8e7e6eb2ebadd852f558a2f8c\"" Feb 9 02:56:44.402022 env[1156]: time="2024-02-09T02:56:44.401980230Z" level=info msg="StartContainer for 
\"5ac839886b21e44b009058095842d15d0eb18dba3240fbf741be44c6223f17e8\"" Feb 9 02:56:44.422250 systemd[1]: Started cri-containerd-54de6cf3e8d0c99c73498a30853129ae239ad9c8e7e6eb2ebadd852f558a2f8c.scope. Feb 9 02:56:44.428220 systemd[1]: Started cri-containerd-5ac839886b21e44b009058095842d15d0eb18dba3240fbf741be44c6223f17e8.scope. Feb 9 02:56:44.469420 env[1156]: time="2024-02-09T02:56:44.469387071Z" level=info msg="StartContainer for \"54de6cf3e8d0c99c73498a30853129ae239ad9c8e7e6eb2ebadd852f558a2f8c\" returns successfully" Feb 9 02:56:44.470325 env[1156]: time="2024-02-09T02:56:44.470311570Z" level=info msg="StartContainer for \"5ac839886b21e44b009058095842d15d0eb18dba3240fbf741be44c6223f17e8\" returns successfully" Feb 9 02:56:44.797412 kubelet[1738]: W0209 02:56:44.797346 1738 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:44.797412 kubelet[1738]: E0209 02:56:44.797389 1738 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:45.284208 kubelet[1738]: E0209 02:56:45.284168 1738 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 139.178.70.99:6443: connect: connection refused Feb 9 02:56:45.386262 systemd[1]: run-containerd-runc-k8s.io-54de6cf3e8d0c99c73498a30853129ae239ad9c8e7e6eb2ebadd852f558a2f8c-runc.ngv6To.mount: Deactivated successfully. 
Feb 9 02:56:45.387653 kubelet[1738]: I0209 02:56:45.387468 1738 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 02:56:45.387653 kubelet[1738]: E0209 02:56:45.387636 1738 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Feb 9 02:56:45.405936 kubelet[1738]: I0209 02:56:45.405908 1738 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused" Feb 9 02:56:45.407683 kubelet[1738]: I0209 02:56:45.407673 1738 status_manager.go:698] "Failed to get status for pod" podUID=b07f05f31a5546916faca3d4c4b4612a pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused" Feb 9 02:56:45.407829 kubelet[1738]: I0209 02:56:45.407821 1738 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.99:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.99:6443: connect: connection refused" Feb 9 02:56:47.465205 kubelet[1738]: E0209 02:56:47.465184 1738 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 9 02:56:48.487613 kubelet[1738]: E0209 02:56:48.487590 1738 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 02:56:48.589220 kubelet[1738]: I0209 02:56:48.589204 1738 
kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 02:56:48.867799 kubelet[1738]: I0209 02:56:48.867781 1738 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 02:56:48.872873 kubelet[1738]: E0209 02:56:48.872854 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:48.973371 kubelet[1738]: E0209 02:56:48.973346 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:49.074178 kubelet[1738]: E0209 02:56:49.074158 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:49.174758 kubelet[1738]: E0209 02:56:49.174572 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:49.275746 kubelet[1738]: E0209 02:56:49.275724 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:49.376352 kubelet[1738]: E0209 02:56:49.376329 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:49.477143 kubelet[1738]: E0209 02:56:49.477091 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:49.478016 systemd[1]: Reloading. 
Feb 9 02:56:49.535183 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2024-02-09T02:56:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 02:56:49.536907 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2024-02-09T02:56:49Z" level=info msg="torcx already run" Feb 9 02:56:49.577483 kubelet[1738]: E0209 02:56:49.577460 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:49.590488 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 02:56:49.590589 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 02:56:49.602169 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 02:56:49.677984 kubelet[1738]: E0209 02:56:49.677959 1738 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 02:56:49.683992 kubelet[1738]: I0209 02:56:49.683974 1738 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 02:56:49.685366 systemd[1]: Stopping kubelet.service... Feb 9 02:56:49.697201 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 02:56:49.697420 systemd[1]: Stopped kubelet.service. Feb 9 02:56:49.699167 systemd[1]: Started kubelet.service. Feb 9 02:56:49.757686 kubelet[2124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 02:56:49.757686 kubelet[2124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 02:56:49.757686 kubelet[2124]: I0209 02:56:49.757379 2124 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 02:56:49.758197 kubelet[2124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 02:56:49.758197 kubelet[2124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 02:56:49.767399 kubelet[2124]: I0209 02:56:49.767378 2124 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 02:56:49.767399 kubelet[2124]: I0209 02:56:49.767395 2124 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 02:56:49.767533 kubelet[2124]: I0209 02:56:49.767521 2124 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 02:56:49.768246 kubelet[2124]: I0209 02:56:49.768235 2124 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 02:56:49.768623 kubelet[2124]: I0209 02:56:49.768615 2124 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 02:56:49.770956 kubelet[2124]: I0209 02:56:49.770945 2124 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 02:56:49.771069 kubelet[2124]: I0209 02:56:49.771059 2124 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 02:56:49.771113 kubelet[2124]: I0209 02:56:49.771101 2124 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 02:56:49.771179 kubelet[2124]: I0209 02:56:49.771120 2124 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 02:56:49.771179 kubelet[2124]: I0209 02:56:49.771128 2124 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 02:56:49.771179 kubelet[2124]: I0209 02:56:49.771150 2124 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 02:56:49.772969 kubelet[2124]: I0209 02:56:49.772960 2124 kubelet.go:398] "Attempting to sync node with API server" Feb 9 02:56:49.777506 kubelet[2124]: I0209 02:56:49.776943 2124 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 02:56:49.777506 kubelet[2124]: I0209 02:56:49.776963 2124 kubelet.go:297] "Adding apiserver pod source" Feb 9 02:56:49.777506 kubelet[2124]: I0209 02:56:49.776972 2124 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 02:56:49.780378 kubelet[2124]: I0209 02:56:49.780365 2124 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 02:56:49.780668 kubelet[2124]: I0209 02:56:49.780656 2124 server.go:1186] "Started kubelet" Feb 9 02:56:49.782648 kubelet[2124]: I0209 02:56:49.782633 2124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 02:56:49.785100 kubelet[2124]: I0209 02:56:49.785087 2124 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 02:56:49.785701 kubelet[2124]: I0209 02:56:49.785692 2124 server.go:451] "Adding debug handlers to kubelet server" Feb 9 02:56:49.789835 kubelet[2124]: I0209 02:56:49.789817 2124 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 02:56:49.791065 kubelet[2124]: I0209 02:56:49.791050 2124 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 02:56:49.792269 kubelet[2124]: E0209 02:56:49.792246 2124 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 02:56:49.792314 kubelet[2124]: E0209 02:56:49.792272 2124 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 02:56:49.806751 kubelet[2124]: I0209 02:56:49.806733 2124 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 02:56:49.816628 kubelet[2124]: I0209 02:56:49.816618 2124 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 02:56:49.816722 kubelet[2124]: I0209 02:56:49.816715 2124 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 02:56:49.816773 kubelet[2124]: I0209 02:56:49.816766 2124 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 02:56:49.816848 kubelet[2124]: E0209 02:56:49.816842 2124 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 02:56:49.840974 kubelet[2124]: I0209 02:56:49.840954 2124 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 02:56:49.840974 kubelet[2124]: I0209 02:56:49.840968 2124 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 02:56:49.840974 kubelet[2124]: I0209 02:56:49.840977 2124 state_mem.go:36] "Initialized new in-memory state store" Feb 9 02:56:49.841230 kubelet[2124]: I0209 02:56:49.841220 2124 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 02:56:49.841265 kubelet[2124]: I0209 02:56:49.841235 2124 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 02:56:49.841265 kubelet[2124]: I0209 02:56:49.841245 2124 policy_none.go:49] "None policy: Start" Feb 9 02:56:49.841991 kubelet[2124]: I0209 02:56:49.841982 2124 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 02:56:49.842049 kubelet[2124]: I0209 02:56:49.842041 2124 state_mem.go:35] "Initializing new in-memory state store" Feb 9 02:56:49.842225 kubelet[2124]: I0209 02:56:49.842217 2124 state_mem.go:75] "Updated machine memory state" Feb 9 02:56:49.845160 kubelet[2124]: I0209 02:56:49.845145 2124 manager.go:455] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 02:56:49.845299 kubelet[2124]: I0209 02:56:49.845283 2124 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 02:56:49.890989 kubelet[2124]: I0209 02:56:49.890968 2124 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 02:56:49.905870 kubelet[2124]: I0209 02:56:49.905850 2124 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 02:56:49.905970 kubelet[2124]: I0209 02:56:49.905906 2124 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 02:56:49.917000 kubelet[2124]: I0209 02:56:49.916970 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:56:49.917080 kubelet[2124]: I0209 02:56:49.917021 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:56:49.917080 kubelet[2124]: I0209 02:56:49.917039 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:56:49.991903 sudo[2175]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 02:56:49.992086 sudo[2175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 02:56:49.992723 kubelet[2124]: I0209 02:56:49.992708 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 02:56:49.992820 kubelet[2124]: I0209 02:56:49.992810 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b07f05f31a5546916faca3d4c4b4612a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b07f05f31a5546916faca3d4c4b4612a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 
02:56:49.992909 kubelet[2124]: I0209 02:56:49.992899 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b07f05f31a5546916faca3d4c4b4612a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b07f05f31a5546916faca3d4c4b4612a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 02:56:49.993012 kubelet[2124]: I0209 02:56:49.993003 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:49.993314 kubelet[2124]: I0209 02:56:49.993302 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:49.993595 kubelet[2124]: I0209 02:56:49.993397 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:49.993595 kubelet[2124]: I0209 02:56:49.993429 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:49.993595 
kubelet[2124]: I0209 02:56:49.993465 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:49.993595 kubelet[2124]: I0209 02:56:49.993488 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b07f05f31a5546916faca3d4c4b4612a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b07f05f31a5546916faca3d4c4b4612a\") " pod="kube-system/kube-apiserver-localhost" Feb 9 02:56:50.615992 sudo[2175]: pam_unix(sudo:session): session closed for user root Feb 9 02:56:50.779423 kubelet[2124]: I0209 02:56:50.779388 2124 apiserver.go:52] "Watching apiserver" Feb 9 02:56:51.191605 kubelet[2124]: I0209 02:56:51.191574 2124 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 02:56:51.202955 kubelet[2124]: I0209 02:56:51.202925 2124 reconciler.go:41] "Reconciler: start to sync state" Feb 9 02:56:51.377997 kubelet[2124]: E0209 02:56:51.377971 2124 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 02:56:51.577331 kubelet[2124]: E0209 02:56:51.577312 2124 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 02:56:51.781405 kubelet[2124]: E0209 02:56:51.781371 2124 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 02:56:51.982185 sudo[1318]: 
pam_unix(sudo:session): session closed for user root Feb 9 02:56:51.983486 sshd[1315]: pam_unix(sshd:session): session closed for user core Feb 9 02:56:51.985494 systemd[1]: sshd@4-139.178.70.99:22-147.75.109.163:38492.service: Deactivated successfully. Feb 9 02:56:51.985979 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 02:56:51.986080 systemd[1]: session-7.scope: Consumed 2.456s CPU time. Feb 9 02:56:51.986299 systemd-logind[1145]: Session 7 logged out. Waiting for processes to exit. Feb 9 02:56:51.986824 systemd-logind[1145]: Removed session 7. Feb 9 02:56:52.588855 kubelet[2124]: I0209 02:56:52.588812 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.5771118680000002 pod.CreationTimestamp="2024-02-09 02:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 02:56:52.179458771 +0000 UTC m=+2.475425241" watchObservedRunningTime="2024-02-09 02:56:52.577111868 +0000 UTC m=+2.873078330" Feb 9 02:56:52.976143 kubelet[2124]: I0209 02:56:52.975953 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.975909287 pod.CreationTimestamp="2024-02-09 02:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 02:56:52.58893284 +0000 UTC m=+2.884899297" watchObservedRunningTime="2024-02-09 02:56:52.975909287 +0000 UTC m=+3.271875749" Feb 9 02:56:53.377558 kubelet[2124]: I0209 02:56:53.377542 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.377507037 pod.CreationTimestamp="2024-02-09 02:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-02-09 02:56:52.976116166 +0000 UTC m=+3.272082635" watchObservedRunningTime="2024-02-09 02:56:53.377507037 +0000 UTC m=+3.673473501" Feb 9 02:57:04.151532 kubelet[2124]: I0209 02:57:04.151506 2124 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 02:57:04.152222 env[1156]: time="2024-02-09T02:57:04.152196136Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 02:57:04.152580 kubelet[2124]: I0209 02:57:04.152565 2124 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 02:57:04.717361 kubelet[2124]: I0209 02:57:04.717336 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:57:04.721124 kubelet[2124]: I0209 02:57:04.721106 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:57:04.721524 systemd[1]: Created slice kubepods-besteffort-pod8c4d83d9_78a7_4de7_b888_f068b84e251c.slice. Feb 9 02:57:04.727498 kubelet[2124]: W0209 02:57:04.727464 2124 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.727629 kubelet[2124]: E0209 02:57:04.727618 2124 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.727704 kubelet[2124]: W0209 02:57:04.727464 2124 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource 
"configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.727758 kubelet[2124]: E0209 02:57:04.727751 2124 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.730727 systemd[1]: Created slice kubepods-burstable-pod81a377a2_7591_4973_bffb_c582258d3312.slice. Feb 9 02:57:04.735900 kubelet[2124]: W0209 02:57:04.735876 2124 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.735900 kubelet[2124]: E0209 02:57:04.735900 2124 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.736054 kubelet[2124]: W0209 02:57:04.736046 2124 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.736080 kubelet[2124]: E0209 02:57:04.736057 2124 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User 
"system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.746592 kubelet[2124]: W0209 02:57:04.746566 2124 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.746592 kubelet[2124]: E0209 02:57:04.746591 2124 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 02:57:04.788666 kubelet[2124]: I0209 02:57:04.788641 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c4d83d9-78a7-4de7-b888-f068b84e251c-kube-proxy\") pod \"kube-proxy-pqglc\" (UID: \"8c4d83d9-78a7-4de7-b888-f068b84e251c\") " pod="kube-system/kube-proxy-pqglc" Feb 9 02:57:04.788666 kubelet[2124]: I0209 02:57:04.788668 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-xtables-lock\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788800 kubelet[2124]: I0209 02:57:04.788681 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a377a2-7591-4973-bffb-c582258d3312-clustermesh-secrets\") pod \"cilium-c8zn2\" (UID: 
\"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788800 kubelet[2124]: I0209 02:57:04.788701 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgn5s\" (UniqueName: \"kubernetes.io/projected/8c4d83d9-78a7-4de7-b888-f068b84e251c-kube-api-access-tgn5s\") pod \"kube-proxy-pqglc\" (UID: \"8c4d83d9-78a7-4de7-b888-f068b84e251c\") " pod="kube-system/kube-proxy-pqglc" Feb 9 02:57:04.788800 kubelet[2124]: I0209 02:57:04.788720 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cilium-run\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788800 kubelet[2124]: I0209 02:57:04.788736 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cni-path\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788800 kubelet[2124]: I0209 02:57:04.788750 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-lib-modules\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788800 kubelet[2124]: I0209 02:57:04.788762 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c4d83d9-78a7-4de7-b888-f068b84e251c-xtables-lock\") pod \"kube-proxy-pqglc\" (UID: \"8c4d83d9-78a7-4de7-b888-f068b84e251c\") " pod="kube-system/kube-proxy-pqglc" Feb 9 02:57:04.788966 kubelet[2124]: I0209 
02:57:04.788798 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-hostproc\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788966 kubelet[2124]: I0209 02:57:04.788813 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-host-proc-sys-kernel\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788966 kubelet[2124]: I0209 02:57:04.788824 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-hubble-tls\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788966 kubelet[2124]: I0209 02:57:04.788835 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85n4\" (UniqueName: \"kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-kube-api-access-q85n4\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788966 kubelet[2124]: I0209 02:57:04.788865 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cilium-cgroup\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.788966 kubelet[2124]: I0209 02:57:04.788882 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-bpf-maps\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.789109 kubelet[2124]: I0209 02:57:04.788896 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81a377a2-7591-4973-bffb-c582258d3312-cilium-config-path\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.789109 kubelet[2124]: I0209 02:57:04.788908 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-host-proc-sys-net\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:04.789109 kubelet[2124]: I0209 02:57:04.788935 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c4d83d9-78a7-4de7-b888-f068b84e251c-lib-modules\") pod \"kube-proxy-pqglc\" (UID: \"8c4d83d9-78a7-4de7-b888-f068b84e251c\") " pod="kube-system/kube-proxy-pqglc" Feb 9 02:57:04.789109 kubelet[2124]: I0209 02:57:04.788948 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-etc-cni-netd\") pod \"cilium-c8zn2\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " pod="kube-system/cilium-c8zn2" Feb 9 02:57:05.053679 kubelet[2124]: I0209 02:57:05.053652 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:57:05.056788 systemd[1]: Created slice kubepods-besteffort-podf98f1c17_4764_4037_a6c8_85cccbdd19a0.slice. 
Feb 9 02:57:05.091792 kubelet[2124]: I0209 02:57:05.091727 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdtx\" (UniqueName: \"kubernetes.io/projected/f98f1c17-4764-4037-a6c8-85cccbdd19a0-kube-api-access-cqdtx\") pod \"cilium-operator-f59cbd8c6-n82qj\" (UID: \"f98f1c17-4764-4037-a6c8-85cccbdd19a0\") " pod="kube-system/cilium-operator-f59cbd8c6-n82qj" Feb 9 02:57:05.091792 kubelet[2124]: I0209 02:57:05.091777 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f98f1c17-4764-4037-a6c8-85cccbdd19a0-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-n82qj\" (UID: \"f98f1c17-4764-4037-a6c8-85cccbdd19a0\") " pod="kube-system/cilium-operator-f59cbd8c6-n82qj" Feb 9 02:57:05.891950 kubelet[2124]: E0209 02:57:05.891924 2124 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 9 02:57:05.892295 kubelet[2124]: E0209 02:57:05.892283 2124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/81a377a2-7591-4973-bffb-c582258d3312-cilium-config-path podName:81a377a2-7591-4973-bffb-c582258d3312 nodeName:}" failed. No retries permitted until 2024-02-09 02:57:06.392259662 +0000 UTC m=+16.688226122 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/81a377a2-7591-4973-bffb-c582258d3312-cilium-config-path") pod "cilium-c8zn2" (UID: "81a377a2-7591-4973-bffb-c582258d3312") : failed to sync configmap cache: timed out waiting for the condition Feb 9 02:57:05.892575 kubelet[2124]: E0209 02:57:05.891927 2124 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 9 02:57:05.892712 kubelet[2124]: E0209 02:57:05.892703 2124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c4d83d9-78a7-4de7-b888-f068b84e251c-kube-proxy podName:8c4d83d9-78a7-4de7-b888-f068b84e251c nodeName:}" failed. No retries permitted until 2024-02-09 02:57:06.392693672 +0000 UTC m=+16.688660136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8c4d83d9-78a7-4de7-b888-f068b84e251c-kube-proxy") pod "kube-proxy-pqglc" (UID: "8c4d83d9-78a7-4de7-b888-f068b84e251c") : failed to sync configmap cache: timed out waiting for the condition Feb 9 02:57:05.896898 kubelet[2124]: E0209 02:57:05.896872 2124 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 9 02:57:05.897016 kubelet[2124]: E0209 02:57:05.896938 2124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81a377a2-7591-4973-bffb-c582258d3312-clustermesh-secrets podName:81a377a2-7591-4973-bffb-c582258d3312 nodeName:}" failed. No retries permitted until 2024-02-09 02:57:06.396924272 +0000 UTC m=+16.692890736 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/81a377a2-7591-4973-bffb-c582258d3312-clustermesh-secrets") pod "cilium-c8zn2" (UID: "81a377a2-7591-4973-bffb-c582258d3312") : failed to sync secret cache: timed out waiting for the condition Feb 9 02:57:05.901654 kubelet[2124]: E0209 02:57:05.901633 2124 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 9 02:57:05.901654 kubelet[2124]: E0209 02:57:05.901650 2124 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-c8zn2: failed to sync secret cache: timed out waiting for the condition Feb 9 02:57:05.901772 kubelet[2124]: E0209 02:57:05.901707 2124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-hubble-tls podName:81a377a2-7591-4973-bffb-c582258d3312 nodeName:}" failed. No retries permitted until 2024-02-09 02:57:06.401693862 +0000 UTC m=+16.697660326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-hubble-tls") pod "cilium-c8zn2" (UID: "81a377a2-7591-4973-bffb-c582258d3312") : failed to sync secret cache: timed out waiting for the condition Feb 9 02:57:05.965720 env[1156]: time="2024-02-09T02:57:05.965686610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-n82qj,Uid:f98f1c17-4764-4037-a6c8-85cccbdd19a0,Namespace:kube-system,Attempt:0,}" Feb 9 02:57:06.027375 env[1156]: time="2024-02-09T02:57:06.027323811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:57:06.027487 env[1156]: time="2024-02-09T02:57:06.027383629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:57:06.027487 env[1156]: time="2024-02-09T02:57:06.027406734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:57:06.027644 env[1156]: time="2024-02-09T02:57:06.027604764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc pid=2224 runtime=io.containerd.runc.v2 Feb 9 02:57:06.041367 systemd[1]: Started cri-containerd-91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc.scope. Feb 9 02:57:06.073062 env[1156]: time="2024-02-09T02:57:06.073031775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-n82qj,Uid:f98f1c17-4764-4037-a6c8-85cccbdd19a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc\"" Feb 9 02:57:06.081515 env[1156]: time="2024-02-09T02:57:06.081494320Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 02:57:06.534070 env[1156]: time="2024-02-09T02:57:06.534031598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c8zn2,Uid:81a377a2-7591-4973-bffb-c582258d3312,Namespace:kube-system,Attempt:0,}" Feb 9 02:57:06.542614 env[1156]: time="2024-02-09T02:57:06.542557895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:57:06.542614 env[1156]: time="2024-02-09T02:57:06.542591725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:57:06.542903 env[1156]: time="2024-02-09T02:57:06.542613681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:57:06.542903 env[1156]: time="2024-02-09T02:57:06.542858725Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab pid=2265 runtime=io.containerd.runc.v2 Feb 9 02:57:06.552888 systemd[1]: Started cri-containerd-454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab.scope. Feb 9 02:57:06.575375 env[1156]: time="2024-02-09T02:57:06.575346126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c8zn2,Uid:81a377a2-7591-4973-bffb-c582258d3312,Namespace:kube-system,Attempt:0,} returns sandbox id \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\"" Feb 9 02:57:06.830206 env[1156]: time="2024-02-09T02:57:06.829893512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqglc,Uid:8c4d83d9-78a7-4de7-b888-f068b84e251c,Namespace:kube-system,Attempt:0,}" Feb 9 02:57:06.883874 env[1156]: time="2024-02-09T02:57:06.883820291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:57:06.883874 env[1156]: time="2024-02-09T02:57:06.883854271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:57:06.884048 env[1156]: time="2024-02-09T02:57:06.884016712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:57:06.884393 env[1156]: time="2024-02-09T02:57:06.884290407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4999c20b218d9610b2792aa51de5fc29b4edd449ed031b4811bb2ed83793e7ff pid=2308 runtime=io.containerd.runc.v2 Feb 9 02:57:06.898675 systemd[1]: Started cri-containerd-4999c20b218d9610b2792aa51de5fc29b4edd449ed031b4811bb2ed83793e7ff.scope. Feb 9 02:57:06.899638 systemd[1]: run-containerd-runc-k8s.io-4999c20b218d9610b2792aa51de5fc29b4edd449ed031b4811bb2ed83793e7ff-runc.PdMPTE.mount: Deactivated successfully. Feb 9 02:57:06.918869 env[1156]: time="2024-02-09T02:57:06.918843137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqglc,Uid:8c4d83d9-78a7-4de7-b888-f068b84e251c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4999c20b218d9610b2792aa51de5fc29b4edd449ed031b4811bb2ed83793e7ff\"" Feb 9 02:57:06.922134 env[1156]: time="2024-02-09T02:57:06.922106380Z" level=info msg="CreateContainer within sandbox \"4999c20b218d9610b2792aa51de5fc29b4edd449ed031b4811bb2ed83793e7ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 02:57:06.944552 env[1156]: time="2024-02-09T02:57:06.944518042Z" level=info msg="CreateContainer within sandbox \"4999c20b218d9610b2792aa51de5fc29b4edd449ed031b4811bb2ed83793e7ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ba9918a509c78cad09561767e2caa0089ac1cad734bf236e8e4e734b55bd19ca\"" Feb 9 02:57:06.946286 env[1156]: time="2024-02-09T02:57:06.946259792Z" level=info msg="StartContainer for \"ba9918a509c78cad09561767e2caa0089ac1cad734bf236e8e4e734b55bd19ca\"" Feb 9 02:57:06.958163 systemd[1]: Started cri-containerd-ba9918a509c78cad09561767e2caa0089ac1cad734bf236e8e4e734b55bd19ca.scope. 
Feb 9 02:57:06.986715 env[1156]: time="2024-02-09T02:57:06.986683482Z" level=info msg="StartContainer for \"ba9918a509c78cad09561767e2caa0089ac1cad734bf236e8e4e734b55bd19ca\" returns successfully" Feb 9 02:57:07.855595 kubelet[2124]: I0209 02:57:07.855569 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pqglc" podStartSLOduration=3.855540364 pod.CreationTimestamp="2024-02-09 02:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 02:57:07.85498719 +0000 UTC m=+18.150953659" watchObservedRunningTime="2024-02-09 02:57:07.855540364 +0000 UTC m=+18.151506828" Feb 9 02:57:08.056413 env[1156]: time="2024-02-09T02:57:08.056388704Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:57:08.057933 env[1156]: time="2024-02-09T02:57:08.057421944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:57:08.058804 env[1156]: time="2024-02-09T02:57:08.058123436Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 02:57:08.058804 env[1156]: time="2024-02-09T02:57:08.058499033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:57:08.060089 env[1156]: time="2024-02-09T02:57:08.060069974Z" 
level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 02:57:08.061312 env[1156]: time="2024-02-09T02:57:08.061294433Z" level=info msg="CreateContainer within sandbox \"91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 02:57:08.067648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009579752.mount: Deactivated successfully. Feb 9 02:57:08.070979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount291210570.mount: Deactivated successfully. Feb 9 02:57:08.073987 env[1156]: time="2024-02-09T02:57:08.073956314Z" level=info msg="CreateContainer within sandbox \"91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\"" Feb 9 02:57:08.074363 env[1156]: time="2024-02-09T02:57:08.074345807Z" level=info msg="StartContainer for \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\"" Feb 9 02:57:08.086024 systemd[1]: Started cri-containerd-11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24.scope. 
Feb 9 02:57:08.126802 env[1156]: time="2024-02-09T02:57:08.126740651Z" level=info msg="StartContainer for \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\" returns successfully" Feb 9 02:57:09.855636 kubelet[2124]: I0209 02:57:09.855428 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-n82qj" podStartSLOduration=-9.22337203199937e+09 pod.CreationTimestamp="2024-02-09 02:57:05 +0000 UTC" firstStartedPulling="2024-02-09 02:57:06.073941506 +0000 UTC m=+16.369907964" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 02:57:08.857946438 +0000 UTC m=+19.153912909" watchObservedRunningTime="2024-02-09 02:57:09.855404327 +0000 UTC m=+20.151370795" Feb 9 02:57:12.314695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870507217.mount: Deactivated successfully. Feb 9 02:57:14.848265 env[1156]: time="2024-02-09T02:57:14.848199420Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:57:14.851412 env[1156]: time="2024-02-09T02:57:14.851386220Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:57:14.853202 env[1156]: time="2024-02-09T02:57:14.853171256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 02:57:14.853747 env[1156]: time="2024-02-09T02:57:14.853721220Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 02:57:14.859503 env[1156]: time="2024-02-09T02:57:14.859260177Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 02:57:14.866723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873121024.mount: Deactivated successfully. Feb 9 02:57:14.871134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799345433.mount: Deactivated successfully. Feb 9 02:57:14.873464 env[1156]: time="2024-02-09T02:57:14.873414201Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\"" Feb 9 02:57:14.874094 env[1156]: time="2024-02-09T02:57:14.874057431Z" level=info msg="StartContainer for \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\"" Feb 9 02:57:14.887757 systemd[1]: Started cri-containerd-dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae.scope. Feb 9 02:57:14.907194 env[1156]: time="2024-02-09T02:57:14.906990927Z" level=info msg="StartContainer for \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\" returns successfully" Feb 9 02:57:14.912454 systemd[1]: cri-containerd-dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae.scope: Deactivated successfully. 
Feb 9 02:57:15.054356 env[1156]: time="2024-02-09T02:57:15.054321038Z" level=info msg="shim disconnected" id=dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae Feb 9 02:57:15.054356 env[1156]: time="2024-02-09T02:57:15.054354273Z" level=warning msg="cleaning up after shim disconnected" id=dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae namespace=k8s.io Feb 9 02:57:15.054356 env[1156]: time="2024-02-09T02:57:15.054360621Z" level=info msg="cleaning up dead shim" Feb 9 02:57:15.059287 env[1156]: time="2024-02-09T02:57:15.059261173Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:57:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2570 runtime=io.containerd.runc.v2\n" Feb 9 02:57:15.864164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae-rootfs.mount: Deactivated successfully. Feb 9 02:57:15.865775 env[1156]: time="2024-02-09T02:57:15.865715621Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 02:57:15.873045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714495855.mount: Deactivated successfully. Feb 9 02:57:15.882067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457288835.mount: Deactivated successfully. 
Feb 9 02:57:15.886035 env[1156]: time="2024-02-09T02:57:15.886002430Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\"" Feb 9 02:57:15.887334 env[1156]: time="2024-02-09T02:57:15.887064483Z" level=info msg="StartContainer for \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\"" Feb 9 02:57:15.897503 systemd[1]: Started cri-containerd-6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6.scope. Feb 9 02:57:15.914779 env[1156]: time="2024-02-09T02:57:15.914750520Z" level=info msg="StartContainer for \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\" returns successfully" Feb 9 02:57:15.923869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 02:57:15.924064 systemd[1]: Stopped systemd-sysctl.service. Feb 9 02:57:15.924427 systemd[1]: Stopping systemd-sysctl.service... Feb 9 02:57:15.925617 systemd[1]: Starting systemd-sysctl.service... Feb 9 02:57:15.926571 systemd[1]: cri-containerd-6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6.scope: Deactivated successfully. Feb 9 02:57:15.942440 env[1156]: time="2024-02-09T02:57:15.942405800Z" level=info msg="shim disconnected" id=6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6 Feb 9 02:57:15.942581 env[1156]: time="2024-02-09T02:57:15.942568717Z" level=warning msg="cleaning up after shim disconnected" id=6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6 namespace=k8s.io Feb 9 02:57:15.942645 env[1156]: time="2024-02-09T02:57:15.942635217Z" level=info msg="cleaning up dead shim" Feb 9 02:57:15.943368 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 02:57:15.948882 env[1156]: time="2024-02-09T02:57:15.948857331Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:57:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2632 runtime=io.containerd.runc.v2\n" Feb 9 02:57:16.872461 env[1156]: time="2024-02-09T02:57:16.871719173Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 02:57:16.879831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173804548.mount: Deactivated successfully. Feb 9 02:57:16.882558 env[1156]: time="2024-02-09T02:57:16.882530409Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\"" Feb 9 02:57:16.883138 env[1156]: time="2024-02-09T02:57:16.883124358Z" level=info msg="StartContainer for \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\"" Feb 9 02:57:16.896336 systemd[1]: Started cri-containerd-0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e.scope. Feb 9 02:57:16.917186 env[1156]: time="2024-02-09T02:57:16.917156561Z" level=info msg="StartContainer for \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\" returns successfully" Feb 9 02:57:16.922720 systemd[1]: cri-containerd-0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e.scope: Deactivated successfully. 
Feb 9 02:57:16.937156 env[1156]: time="2024-02-09T02:57:16.937127135Z" level=info msg="shim disconnected" id=0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e Feb 9 02:57:16.937492 env[1156]: time="2024-02-09T02:57:16.937289930Z" level=warning msg="cleaning up after shim disconnected" id=0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e namespace=k8s.io Feb 9 02:57:16.937492 env[1156]: time="2024-02-09T02:57:16.937299478Z" level=info msg="cleaning up dead shim" Feb 9 02:57:16.941504 env[1156]: time="2024-02-09T02:57:16.941485266Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:57:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2688 runtime=io.containerd.runc.v2\n" Feb 9 02:57:17.864032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e-rootfs.mount: Deactivated successfully. Feb 9 02:57:17.869570 env[1156]: time="2024-02-09T02:57:17.869480244Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 02:57:17.880671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838654279.mount: Deactivated successfully. Feb 9 02:57:17.883195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106004519.mount: Deactivated successfully. 
Feb 9 02:57:17.884585 env[1156]: time="2024-02-09T02:57:17.884564156Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\"" Feb 9 02:57:17.888399 env[1156]: time="2024-02-09T02:57:17.888375802Z" level=info msg="StartContainer for \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\"" Feb 9 02:57:17.897898 systemd[1]: Started cri-containerd-307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff.scope. Feb 9 02:57:17.914956 systemd[1]: cri-containerd-307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff.scope: Deactivated successfully. Feb 9 02:57:17.916664 env[1156]: time="2024-02-09T02:57:17.916644351Z" level=info msg="StartContainer for \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\" returns successfully" Feb 9 02:57:17.919010 env[1156]: time="2024-02-09T02:57:17.918957147Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81a377a2_7591_4973_bffb_c582258d3312.slice/cri-containerd-307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff.scope/memory.events\": no such file or directory" Feb 9 02:57:17.935386 env[1156]: time="2024-02-09T02:57:17.935339885Z" level=info msg="shim disconnected" id=307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff Feb 9 02:57:17.935386 env[1156]: time="2024-02-09T02:57:17.935384937Z" level=warning msg="cleaning up after shim disconnected" id=307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff namespace=k8s.io Feb 9 02:57:17.935528 env[1156]: time="2024-02-09T02:57:17.935392169Z" level=info msg="cleaning up dead shim" Feb 9 02:57:17.940204 env[1156]: time="2024-02-09T02:57:17.940185925Z" level=warning 
msg="cleanup warnings time=\"2024-02-09T02:57:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2743 runtime=io.containerd.runc.v2\n" Feb 9 02:57:18.874471 env[1156]: time="2024-02-09T02:57:18.874439603Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 02:57:18.884410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187528449.mount: Deactivated successfully. Feb 9 02:57:18.887144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088145942.mount: Deactivated successfully. Feb 9 02:57:18.888280 env[1156]: time="2024-02-09T02:57:18.888258379Z" level=info msg="CreateContainer within sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\"" Feb 9 02:57:18.889660 env[1156]: time="2024-02-09T02:57:18.889443705Z" level=info msg="StartContainer for \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\"" Feb 9 02:57:18.900308 systemd[1]: Started cri-containerd-2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc.scope. Feb 9 02:57:18.924512 env[1156]: time="2024-02-09T02:57:18.924485551Z" level=info msg="StartContainer for \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\" returns successfully" Feb 9 02:57:19.022707 kubelet[2124]: I0209 02:57:19.022349 2124 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 02:57:19.088714 kubelet[2124]: I0209 02:57:19.088685 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:57:19.092038 systemd[1]: Created slice kubepods-burstable-podebdd308a_803a_4c92_8529_e44e94187982.slice. 
Feb 9 02:57:19.093523 kubelet[2124]: I0209 02:57:19.093508 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:57:19.096759 systemd[1]: Created slice kubepods-burstable-podfca5a717_bbae_4fd5_afe8_d98910430e9e.slice. Feb 9 02:57:19.176453 kubelet[2124]: I0209 02:57:19.176391 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fca5a717-bbae-4fd5-afe8-d98910430e9e-config-volume\") pod \"coredns-787d4945fb-nqxcx\" (UID: \"fca5a717-bbae-4fd5-afe8-d98910430e9e\") " pod="kube-system/coredns-787d4945fb-nqxcx" Feb 9 02:57:19.176570 kubelet[2124]: I0209 02:57:19.176562 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebdd308a-803a-4c92-8529-e44e94187982-config-volume\") pod \"coredns-787d4945fb-s98gn\" (UID: \"ebdd308a-803a-4c92-8529-e44e94187982\") " pod="kube-system/coredns-787d4945fb-s98gn" Feb 9 02:57:19.176697 kubelet[2124]: I0209 02:57:19.176689 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8twlt\" (UniqueName: \"kubernetes.io/projected/fca5a717-bbae-4fd5-afe8-d98910430e9e-kube-api-access-8twlt\") pod \"coredns-787d4945fb-nqxcx\" (UID: \"fca5a717-bbae-4fd5-afe8-d98910430e9e\") " pod="kube-system/coredns-787d4945fb-nqxcx" Feb 9 02:57:19.176780 kubelet[2124]: I0209 02:57:19.176772 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lbd\" (UniqueName: \"kubernetes.io/projected/ebdd308a-803a-4c92-8529-e44e94187982-kube-api-access-94lbd\") pod \"coredns-787d4945fb-s98gn\" (UID: \"ebdd308a-803a-4c92-8529-e44e94187982\") " pod="kube-system/coredns-787d4945fb-s98gn" Feb 9 02:57:19.239933 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 02:57:19.395372 env[1156]: time="2024-02-09T02:57:19.395346219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-s98gn,Uid:ebdd308a-803a-4c92-8529-e44e94187982,Namespace:kube-system,Attempt:0,}" Feb 9 02:57:19.399793 env[1156]: time="2024-02-09T02:57:19.399769500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-nqxcx,Uid:fca5a717-bbae-4fd5-afe8-d98910430e9e,Namespace:kube-system,Attempt:0,}" Feb 9 02:57:19.479936 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 9 02:57:19.885836 kubelet[2124]: I0209 02:57:19.885816 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-c8zn2" podStartSLOduration=-9.22337202096898e+09 pod.CreationTimestamp="2024-02-09 02:57:04 +0000 UTC" firstStartedPulling="2024-02-09 02:57:06.576183164 +0000 UTC m=+16.872149624" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 02:57:19.879133544 +0000 UTC m=+30.175100012" watchObservedRunningTime="2024-02-09 02:57:19.885794272 +0000 UTC m=+30.181760735" Feb 9 02:57:47.172819 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 02:57:47.172934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 02:57:47.171112 systemd-networkd[1064]: cilium_host: Link UP Feb 9 02:57:47.171240 systemd-networkd[1064]: cilium_net: Link UP Feb 9 02:57:47.171801 systemd-networkd[1064]: cilium_net: Gained carrier Feb 9 02:57:47.172845 systemd-networkd[1064]: cilium_host: Gained carrier Feb 9 02:57:47.297441 systemd-networkd[1064]: cilium_vxlan: Link UP Feb 9 02:57:47.297451 systemd-networkd[1064]: cilium_vxlan: Gained carrier Feb 9 02:57:47.446572 systemd-networkd[1064]: cilium_net: Gained IPv6LL Feb 9 02:57:47.462060 systemd-networkd[1064]: cilium_host: Gained IPv6LL Feb 9 02:57:47.879930 kernel: NET: Registered PF_ALG protocol family Feb 9 02:57:48.469189 
systemd-networkd[1064]: lxc_health: Link UP Feb 9 02:57:48.485846 systemd-networkd[1064]: lxc_health: Gained carrier Feb 9 02:57:48.486002 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 02:57:48.606018 systemd-networkd[1064]: cilium_vxlan: Gained IPv6LL Feb 9 02:57:48.974487 systemd-networkd[1064]: lxc5fec13db916a: Link UP Feb 9 02:57:48.979933 kernel: eth0: renamed from tmp307bf Feb 9 02:57:48.983396 systemd-networkd[1064]: lxca043e662d09a: Link UP Feb 9 02:57:48.984746 systemd-networkd[1064]: lxc5fec13db916a: Gained carrier Feb 9 02:57:48.985172 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5fec13db916a: link becomes ready Feb 9 02:57:48.989930 kernel: eth0: renamed from tmp7c521 Feb 9 02:57:48.994454 systemd-networkd[1064]: lxca043e662d09a: Gained carrier Feb 9 02:57:48.995009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca043e662d09a: link becomes ready Feb 9 02:57:50.206057 systemd-networkd[1064]: lxc_health: Gained IPv6LL Feb 9 02:57:50.462035 systemd-networkd[1064]: lxc5fec13db916a: Gained IPv6LL Feb 9 02:57:50.654043 systemd-networkd[1064]: lxca043e662d09a: Gained IPv6LL Feb 9 02:57:51.780609 env[1156]: time="2024-02-09T02:57:51.780564994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:57:51.780841 env[1156]: time="2024-02-09T02:57:51.780608967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:57:51.780841 env[1156]: time="2024-02-09T02:57:51.780626784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:57:51.780841 env[1156]: time="2024-02-09T02:57:51.780696607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5219d09182f2d637786d5fafe52646b0bfe6967e195ac52cd77ec9acbc615c pid=3308 runtime=io.containerd.runc.v2 Feb 9 02:57:51.795212 env[1156]: time="2024-02-09T02:57:51.795172048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:57:51.795307 env[1156]: time="2024-02-09T02:57:51.795219983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:57:51.795307 env[1156]: time="2024-02-09T02:57:51.795235020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:57:51.797940 env[1156]: time="2024-02-09T02:57:51.795321698Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/307bf2ee14c44226f19afc920407674940380280fcf3e2a0b55af8e3ebbebcfe pid=3328 runtime=io.containerd.runc.v2 Feb 9 02:57:51.823583 systemd[1]: Started cri-containerd-307bf2ee14c44226f19afc920407674940380280fcf3e2a0b55af8e3ebbebcfe.scope. Feb 9 02:57:51.839398 systemd[1]: Started cri-containerd-7c5219d09182f2d637786d5fafe52646b0bfe6967e195ac52cd77ec9acbc615c.scope. 
Feb 9 02:57:51.870228 systemd-resolved[1106]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 02:57:51.870258 systemd-resolved[1106]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 02:57:51.909752 env[1156]: time="2024-02-09T02:57:51.901477970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-s98gn,Uid:ebdd308a-803a-4c92-8529-e44e94187982,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c5219d09182f2d637786d5fafe52646b0bfe6967e195ac52cd77ec9acbc615c\"" Feb 9 02:57:51.909752 env[1156]: time="2024-02-09T02:57:51.904773232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-nqxcx,Uid:fca5a717-bbae-4fd5-afe8-d98910430e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"307bf2ee14c44226f19afc920407674940380280fcf3e2a0b55af8e3ebbebcfe\"" Feb 9 02:57:51.913566 env[1156]: time="2024-02-09T02:57:51.913394141Z" level=info msg="CreateContainer within sandbox \"307bf2ee14c44226f19afc920407674940380280fcf3e2a0b55af8e3ebbebcfe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 02:57:51.934581 env[1156]: time="2024-02-09T02:57:51.934551322Z" level=info msg="CreateContainer within sandbox \"7c5219d09182f2d637786d5fafe52646b0bfe6967e195ac52cd77ec9acbc615c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 02:57:52.067668 env[1156]: time="2024-02-09T02:57:52.066972858Z" level=info msg="CreateContainer within sandbox \"307bf2ee14c44226f19afc920407674940380280fcf3e2a0b55af8e3ebbebcfe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97cb215de02c815f7750bea5e71669b40ef08807f29bc2cbdad85a91e6c858a4\"" Feb 9 02:57:52.067928 env[1156]: time="2024-02-09T02:57:52.067898905Z" level=info msg="StartContainer for \"97cb215de02c815f7750bea5e71669b40ef08807f29bc2cbdad85a91e6c858a4\"" Feb 9 02:57:52.079912 env[1156]: time="2024-02-09T02:57:52.079879421Z" level=info 
msg="CreateContainer within sandbox \"7c5219d09182f2d637786d5fafe52646b0bfe6967e195ac52cd77ec9acbc615c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84db6b5a4cc16e11253c20f45cfc9e4ab517560eed4b01369215b139aadf571a\"" Feb 9 02:57:52.080620 env[1156]: time="2024-02-09T02:57:52.080604268Z" level=info msg="StartContainer for \"84db6b5a4cc16e11253c20f45cfc9e4ab517560eed4b01369215b139aadf571a\"" Feb 9 02:57:52.085326 systemd[1]: Started cri-containerd-97cb215de02c815f7750bea5e71669b40ef08807f29bc2cbdad85a91e6c858a4.scope. Feb 9 02:57:52.097026 systemd[1]: Started cri-containerd-84db6b5a4cc16e11253c20f45cfc9e4ab517560eed4b01369215b139aadf571a.scope. Feb 9 02:57:52.119544 env[1156]: time="2024-02-09T02:57:52.119498211Z" level=info msg="StartContainer for \"97cb215de02c815f7750bea5e71669b40ef08807f29bc2cbdad85a91e6c858a4\" returns successfully" Feb 9 02:57:52.124726 env[1156]: time="2024-02-09T02:57:52.124697731Z" level=info msg="StartContainer for \"84db6b5a4cc16e11253c20f45cfc9e4ab517560eed4b01369215b139aadf571a\" returns successfully" Feb 9 02:57:52.784345 systemd[1]: run-containerd-runc-k8s.io-7c5219d09182f2d637786d5fafe52646b0bfe6967e195ac52cd77ec9acbc615c-runc.Pm8F27.mount: Deactivated successfully. 
Feb 9 02:57:52.943352 kubelet[2124]: I0209 02:57:52.943332 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-nqxcx" podStartSLOduration=47.943302243 pod.CreationTimestamp="2024-02-09 02:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 02:57:52.941399899 +0000 UTC m=+63.237366369" watchObservedRunningTime="2024-02-09 02:57:52.943302243 +0000 UTC m=+63.239268707" Feb 9 02:57:52.952259 kubelet[2124]: I0209 02:57:52.952235 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-s98gn" podStartSLOduration=47.952213251 pod.CreationTimestamp="2024-02-09 02:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 02:57:52.95151393 +0000 UTC m=+63.247480399" watchObservedRunningTime="2024-02-09 02:57:52.952213251 +0000 UTC m=+63.248179722" Feb 9 02:58:27.354803 systemd[1]: Started sshd@5-139.178.70.99:22-147.75.109.163:60300.service. Feb 9 02:58:27.410358 sshd[3524]: Accepted publickey for core from 147.75.109.163 port 60300 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:58:27.411472 sshd[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:58:27.415543 systemd[1]: Started session-8.scope. Feb 9 02:58:27.415969 systemd-logind[1145]: New session 8 of user core. Feb 9 02:58:27.561974 sshd[3524]: pam_unix(sshd:session): session closed for user core Feb 9 02:58:27.563613 systemd[1]: sshd@5-139.178.70.99:22-147.75.109.163:60300.service: Deactivated successfully. Feb 9 02:58:27.564079 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 02:58:27.564655 systemd-logind[1145]: Session 8 logged out. Waiting for processes to exit. Feb 9 02:58:27.565157 systemd-logind[1145]: Removed session 8. 
Feb 9 02:58:32.565227 systemd[1]: Started sshd@6-139.178.70.99:22-147.75.109.163:60316.service. Feb 9 02:58:32.609504 sshd[3538]: Accepted publickey for core from 147.75.109.163 port 60316 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:58:32.610423 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:58:32.613699 systemd[1]: Started session-9.scope. Feb 9 02:58:32.614031 systemd-logind[1145]: New session 9 of user core. Feb 9 02:58:32.723060 sshd[3538]: pam_unix(sshd:session): session closed for user core Feb 9 02:58:32.724605 systemd-logind[1145]: Session 9 logged out. Waiting for processes to exit. Feb 9 02:58:32.724809 systemd[1]: sshd@6-139.178.70.99:22-147.75.109.163:60316.service: Deactivated successfully. Feb 9 02:58:32.725256 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 02:58:32.725979 systemd-logind[1145]: Removed session 9. Feb 9 02:58:37.725210 systemd[1]: Started sshd@7-139.178.70.99:22-147.75.109.163:51366.service. Feb 9 02:58:37.894390 sshd[3550]: Accepted publickey for core from 147.75.109.163 port 51366 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:58:37.895614 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:58:37.898577 systemd[1]: Started session-10.scope. Feb 9 02:58:37.898958 systemd-logind[1145]: New session 10 of user core. Feb 9 02:58:38.030646 sshd[3550]: pam_unix(sshd:session): session closed for user core Feb 9 02:58:38.032541 systemd[1]: sshd@7-139.178.70.99:22-147.75.109.163:51366.service: Deactivated successfully. Feb 9 02:58:38.033079 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 02:58:38.033639 systemd-logind[1145]: Session 10 logged out. Waiting for processes to exit. Feb 9 02:58:38.034137 systemd-logind[1145]: Removed session 10. Feb 9 02:58:43.034573 systemd[1]: Started sshd@8-139.178.70.99:22-147.75.109.163:51376.service. 
Feb 9 02:58:43.068580 sshd[3564]: Accepted publickey for core from 147.75.109.163 port 51376 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:58:43.069377 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:58:43.072507 systemd[1]: Started session-11.scope. Feb 9 02:58:43.072966 systemd-logind[1145]: New session 11 of user core. Feb 9 02:58:43.215188 sshd[3564]: pam_unix(sshd:session): session closed for user core Feb 9 02:58:43.216874 systemd[1]: sshd@8-139.178.70.99:22-147.75.109.163:51376.service: Deactivated successfully. Feb 9 02:58:43.217326 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 02:58:43.217604 systemd-logind[1145]: Session 11 logged out. Waiting for processes to exit. Feb 9 02:58:43.218041 systemd-logind[1145]: Removed session 11. Feb 9 02:58:48.218467 systemd[1]: Started sshd@9-139.178.70.99:22-147.75.109.163:49422.service. Feb 9 02:58:48.355406 sshd[3577]: Accepted publickey for core from 147.75.109.163 port 49422 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:58:48.356535 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:58:48.358944 systemd-logind[1145]: New session 12 of user core. Feb 9 02:58:48.359629 systemd[1]: Started session-12.scope. Feb 9 02:58:48.486895 sshd[3577]: pam_unix(sshd:session): session closed for user core Feb 9 02:58:48.489332 systemd[1]: Started sshd@10-139.178.70.99:22-147.75.109.163:49430.service. Feb 9 02:58:48.503728 systemd-logind[1145]: Session 12 logged out. Waiting for processes to exit. Feb 9 02:58:48.503855 systemd[1]: sshd@9-139.178.70.99:22-147.75.109.163:49422.service: Deactivated successfully. Feb 9 02:58:48.504313 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 02:58:48.504862 systemd-logind[1145]: Removed session 12. 
Feb 9 02:58:48.609840 sshd[3588]: Accepted publickey for core from 147.75.109.163 port 49430 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:58:48.611016 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:58:48.625190 systemd-logind[1145]: New session 13 of user core. Feb 9 02:58:48.625978 systemd[1]: Started session-13.scope. Feb 9 02:58:49.767731 systemd[1]: Started sshd@11-139.178.70.99:22-147.75.109.163:49442.service. Feb 9 02:58:49.769618 sshd[3588]: pam_unix(sshd:session): session closed for user core Feb 9 02:58:49.780095 systemd[1]: sshd@10-139.178.70.99:22-147.75.109.163:49430.service: Deactivated successfully. Feb 9 02:58:49.780593 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 02:58:49.781664 systemd-logind[1145]: Session 13 logged out. Waiting for processes to exit. Feb 9 02:58:49.783470 systemd-logind[1145]: Removed session 13. Feb 9 02:58:49.820624 sshd[3621]: Accepted publickey for core from 147.75.109.163 port 49442 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:58:49.822250 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:58:49.825328 systemd[1]: Started session-14.scope. Feb 9 02:58:49.825955 systemd-logind[1145]: New session 14 of user core. Feb 9 02:58:50.047466 sshd[3621]: pam_unix(sshd:session): session closed for user core Feb 9 02:58:50.049207 systemd-logind[1145]: Session 14 logged out. Waiting for processes to exit. Feb 9 02:58:50.049410 systemd[1]: sshd@11-139.178.70.99:22-147.75.109.163:49442.service: Deactivated successfully. Feb 9 02:58:50.049820 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 02:58:50.050516 systemd-logind[1145]: Removed session 14. Feb 9 02:58:55.050263 systemd[1]: Started sshd@12-139.178.70.99:22-147.75.109.163:45972.service. 
Feb 9 02:58:55.083213 sshd[3637]: Accepted publickey for core from 147.75.109.163 port 45972 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:58:55.084348 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:58:55.088039 systemd[1]: Started session-15.scope. Feb 9 02:58:55.088311 systemd-logind[1145]: New session 15 of user core. Feb 9 02:58:55.212707 sshd[3637]: pam_unix(sshd:session): session closed for user core Feb 9 02:58:55.214549 systemd[1]: sshd@12-139.178.70.99:22-147.75.109.163:45972.service: Deactivated successfully. Feb 9 02:58:55.214966 systemd-logind[1145]: Session 15 logged out. Waiting for processes to exit. Feb 9 02:58:55.215044 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 02:58:55.215903 systemd-logind[1145]: Removed session 15. Feb 9 02:59:00.216073 systemd[1]: Started sshd@13-139.178.70.99:22-147.75.109.163:45978.service. Feb 9 02:59:00.246733 sshd[3649]: Accepted publickey for core from 147.75.109.163 port 45978 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:00.247579 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:00.250867 systemd[1]: Started session-16.scope. Feb 9 02:59:00.251312 systemd-logind[1145]: New session 16 of user core. Feb 9 02:59:00.348142 sshd[3649]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:00.350628 systemd[1]: Started sshd@14-139.178.70.99:22-147.75.109.163:45994.service. Feb 9 02:59:00.354357 systemd-logind[1145]: Session 16 logged out. Waiting for processes to exit. Feb 9 02:59:00.354622 systemd[1]: sshd@13-139.178.70.99:22-147.75.109.163:45978.service: Deactivated successfully. Feb 9 02:59:00.355100 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 02:59:00.355866 systemd-logind[1145]: Removed session 16. 
Feb 9 02:59:00.382239 sshd[3659]: Accepted publickey for core from 147.75.109.163 port 45994 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:00.383075 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:00.386317 systemd[1]: Started session-17.scope. Feb 9 02:59:00.386961 systemd-logind[1145]: New session 17 of user core. Feb 9 02:59:01.146011 sshd[3659]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:01.149117 systemd[1]: Started sshd@15-139.178.70.99:22-147.75.109.163:46008.service. Feb 9 02:59:01.150978 systemd[1]: sshd@14-139.178.70.99:22-147.75.109.163:45994.service: Deactivated successfully. Feb 9 02:59:01.151515 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 02:59:01.152150 systemd-logind[1145]: Session 17 logged out. Waiting for processes to exit. Feb 9 02:59:01.152890 systemd-logind[1145]: Removed session 17. Feb 9 02:59:01.193552 sshd[3669]: Accepted publickey for core from 147.75.109.163 port 46008 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:01.194482 sshd[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:01.197976 systemd-logind[1145]: New session 18 of user core. Feb 9 02:59:01.198617 systemd[1]: Started session-18.scope. Feb 9 02:59:02.368654 sshd[3669]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:02.373585 systemd[1]: Started sshd@16-139.178.70.99:22-147.75.109.163:46014.service. Feb 9 02:59:02.376945 systemd[1]: sshd@15-139.178.70.99:22-147.75.109.163:46008.service: Deactivated successfully. Feb 9 02:59:02.380161 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 02:59:02.380845 systemd-logind[1145]: Session 18 logged out. Waiting for processes to exit. Feb 9 02:59:02.381598 systemd-logind[1145]: Removed session 18. 
Feb 9 02:59:02.418003 sshd[3704]: Accepted publickey for core from 147.75.109.163 port 46014 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:02.418958 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:02.422313 systemd[1]: Started session-19.scope. Feb 9 02:59:02.422757 systemd-logind[1145]: New session 19 of user core. Feb 9 02:59:02.864601 sshd[3704]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:02.867547 systemd[1]: Started sshd@17-139.178.70.99:22-147.75.109.163:46030.service. Feb 9 02:59:02.870390 systemd[1]: sshd@16-139.178.70.99:22-147.75.109.163:46014.service: Deactivated successfully. Feb 9 02:59:02.871026 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 02:59:02.872005 systemd-logind[1145]: Session 19 logged out. Waiting for processes to exit. Feb 9 02:59:02.872749 systemd-logind[1145]: Removed session 19. Feb 9 02:59:02.905526 sshd[3764]: Accepted publickey for core from 147.75.109.163 port 46030 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:02.906957 sshd[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:02.910238 systemd[1]: Started session-20.scope. Feb 9 02:59:02.910439 systemd-logind[1145]: New session 20 of user core. Feb 9 02:59:03.010871 sshd[3764]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:03.012612 systemd[1]: sshd@17-139.178.70.99:22-147.75.109.163:46030.service: Deactivated successfully. Feb 9 02:59:03.013136 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 02:59:03.013662 systemd-logind[1145]: Session 20 logged out. Waiting for processes to exit. Feb 9 02:59:03.014198 systemd-logind[1145]: Removed session 20. Feb 9 02:59:08.015052 systemd[1]: Started sshd@18-139.178.70.99:22-147.75.109.163:41682.service. 
Feb 9 02:59:08.050515 sshd[3806]: Accepted publickey for core from 147.75.109.163 port 41682 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:08.051700 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:08.055448 systemd[1]: Started session-21.scope. Feb 9 02:59:08.055995 systemd-logind[1145]: New session 21 of user core. Feb 9 02:59:08.193185 sshd[3806]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:08.194793 systemd[1]: sshd@18-139.178.70.99:22-147.75.109.163:41682.service: Deactivated successfully. Feb 9 02:59:08.195272 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 02:59:08.195737 systemd-logind[1145]: Session 21 logged out. Waiting for processes to exit. Feb 9 02:59:08.196276 systemd-logind[1145]: Removed session 21. Feb 9 02:59:13.196884 systemd[1]: Started sshd@19-139.178.70.99:22-147.75.109.163:41686.service. Feb 9 02:59:13.227598 sshd[3818]: Accepted publickey for core from 147.75.109.163 port 41686 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:13.228852 sshd[3818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:13.232728 systemd[1]: Started session-22.scope. Feb 9 02:59:13.233243 systemd-logind[1145]: New session 22 of user core. Feb 9 02:59:13.325419 sshd[3818]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:13.326858 systemd[1]: sshd@19-139.178.70.99:22-147.75.109.163:41686.service: Deactivated successfully. Feb 9 02:59:13.327360 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 02:59:13.327806 systemd-logind[1145]: Session 22 logged out. Waiting for processes to exit. Feb 9 02:59:13.328341 systemd-logind[1145]: Removed session 22. Feb 9 02:59:18.329115 systemd[1]: Started sshd@20-139.178.70.99:22-147.75.109.163:38106.service. 
Feb 9 02:59:18.359440 sshd[3831]: Accepted publickey for core from 147.75.109.163 port 38106 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:18.360677 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:18.363851 systemd[1]: Started session-23.scope. Feb 9 02:59:18.364346 systemd-logind[1145]: New session 23 of user core. Feb 9 02:59:18.454278 sshd[3831]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:18.456570 systemd[1]: sshd@20-139.178.70.99:22-147.75.109.163:38106.service: Deactivated successfully. Feb 9 02:59:18.457279 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 02:59:18.458151 systemd-logind[1145]: Session 23 logged out. Waiting for processes to exit. Feb 9 02:59:18.458736 systemd-logind[1145]: Removed session 23. Feb 9 02:59:23.457313 systemd[1]: Started sshd@21-139.178.70.99:22-147.75.109.163:38118.service. Feb 9 02:59:23.495097 sshd[3843]: Accepted publickey for core from 147.75.109.163 port 38118 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:23.496270 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:23.498865 systemd-logind[1145]: New session 24 of user core. Feb 9 02:59:23.500103 systemd[1]: Started session-24.scope. Feb 9 02:59:23.598910 sshd[3843]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:23.601691 systemd[1]: Started sshd@22-139.178.70.99:22-147.75.109.163:38120.service. Feb 9 02:59:23.603126 systemd[1]: sshd@21-139.178.70.99:22-147.75.109.163:38118.service: Deactivated successfully. Feb 9 02:59:23.603589 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 02:59:23.604557 systemd-logind[1145]: Session 24 logged out. Waiting for processes to exit. Feb 9 02:59:23.605209 systemd-logind[1145]: Removed session 24. 
Feb 9 02:59:23.635158 sshd[3854]: Accepted publickey for core from 147.75.109.163 port 38120 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:23.636110 sshd[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:23.639185 systemd[1]: Started session-25.scope. Feb 9 02:59:23.640244 systemd-logind[1145]: New session 25 of user core. Feb 9 02:59:25.295947 env[1156]: time="2024-02-09T02:59:25.295901270Z" level=info msg="StopContainer for \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\" with timeout 30 (s)" Feb 9 02:59:25.296994 env[1156]: time="2024-02-09T02:59:25.296977617Z" level=info msg="Stop container \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\" with signal terminated" Feb 9 02:59:25.303181 systemd[1]: run-containerd-runc-k8s.io-2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc-runc.hjZRob.mount: Deactivated successfully. Feb 9 02:59:25.322644 systemd[1]: cri-containerd-11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24.scope: Deactivated successfully. Feb 9 02:59:25.335407 env[1156]: time="2024-02-09T02:59:25.335367102Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 02:59:25.338205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24-rootfs.mount: Deactivated successfully. 
Feb 9 02:59:25.341838 env[1156]: time="2024-02-09T02:59:25.341808400Z" level=info msg="shim disconnected" id=11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24 Feb 9 02:59:25.342236 env[1156]: time="2024-02-09T02:59:25.342218838Z" level=warning msg="cleaning up after shim disconnected" id=11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24 namespace=k8s.io Feb 9 02:59:25.342236 env[1156]: time="2024-02-09T02:59:25.342232519Z" level=info msg="cleaning up dead shim" Feb 9 02:59:25.342432 env[1156]: time="2024-02-09T02:59:25.342109391Z" level=info msg="StopContainer for \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\" with timeout 1 (s)" Feb 9 02:59:25.342601 env[1156]: time="2024-02-09T02:59:25.342585689Z" level=info msg="Stop container \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\" with signal terminated" Feb 9 02:59:25.351619 env[1156]: time="2024-02-09T02:59:25.351590783Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3900 runtime=io.containerd.runc.v2\n" Feb 9 02:59:25.353105 systemd-networkd[1064]: lxc_health: Link DOWN Feb 9 02:59:25.353111 systemd-networkd[1064]: lxc_health: Lost carrier Feb 9 02:59:25.373583 env[1156]: time="2024-02-09T02:59:25.373552977Z" level=info msg="StopContainer for \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\" returns successfully" Feb 9 02:59:25.374034 env[1156]: time="2024-02-09T02:59:25.374015796Z" level=info msg="StopPodSandbox for \"91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc\"" Feb 9 02:59:25.374144 env[1156]: time="2024-02-09T02:59:25.374132235Z" level=info msg="Container to stop \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 02:59:25.375158 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc-shm.mount: Deactivated successfully. Feb 9 02:59:25.379184 systemd[1]: cri-containerd-91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc.scope: Deactivated successfully. Feb 9 02:59:25.424166 systemd[1]: cri-containerd-2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc.scope: Deactivated successfully. Feb 9 02:59:25.424344 systemd[1]: cri-containerd-2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc.scope: Consumed 4.661s CPU time. Feb 9 02:59:25.440737 env[1156]: time="2024-02-09T02:59:25.440614686Z" level=info msg="shim disconnected" id=91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc Feb 9 02:59:25.440737 env[1156]: time="2024-02-09T02:59:25.440648966Z" level=warning msg="cleaning up after shim disconnected" id=91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc namespace=k8s.io Feb 9 02:59:25.440737 env[1156]: time="2024-02-09T02:59:25.440655841Z" level=info msg="cleaning up dead shim" Feb 9 02:59:25.446395 env[1156]: time="2024-02-09T02:59:25.446377247Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3953 runtime=io.containerd.runc.v2\n" Feb 9 02:59:25.451502 env[1156]: time="2024-02-09T02:59:25.451412487Z" level=info msg="TearDown network for sandbox \"91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc\" successfully" Feb 9 02:59:25.460587 env[1156]: time="2024-02-09T02:59:25.452221884Z" level=info msg="StopPodSandbox for \"91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc\" returns successfully" Feb 9 02:59:25.474974 env[1156]: time="2024-02-09T02:59:25.474940741Z" level=info msg="shim disconnected" id=2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc Feb 9 02:59:25.474974 env[1156]: time="2024-02-09T02:59:25.474972389Z" level=warning msg="cleaning up 
after shim disconnected" id=2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc namespace=k8s.io Feb 9 02:59:25.474974 env[1156]: time="2024-02-09T02:59:25.474978813Z" level=info msg="cleaning up dead shim" Feb 9 02:59:25.480330 env[1156]: time="2024-02-09T02:59:25.480301433Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3966 runtime=io.containerd.runc.v2\n" Feb 9 02:59:25.502923 env[1156]: time="2024-02-09T02:59:25.502883461Z" level=info msg="StopContainer for \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\" returns successfully" Feb 9 02:59:25.503389 env[1156]: time="2024-02-09T02:59:25.503367397Z" level=info msg="StopPodSandbox for \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\"" Feb 9 02:59:25.503425 env[1156]: time="2024-02-09T02:59:25.503413349Z" level=info msg="Container to stop \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 02:59:25.503452 env[1156]: time="2024-02-09T02:59:25.503423797Z" level=info msg="Container to stop \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 02:59:25.503452 env[1156]: time="2024-02-09T02:59:25.503430651Z" level=info msg="Container to stop \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 02:59:25.503452 env[1156]: time="2024-02-09T02:59:25.503437812Z" level=info msg="Container to stop \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 02:59:25.503452 env[1156]: time="2024-02-09T02:59:25.503444227Z" level=info msg="Container to stop \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\" must be 
in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 02:59:25.507452 systemd[1]: cri-containerd-454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab.scope: Deactivated successfully. Feb 9 02:59:25.582710 kubelet[2124]: I0209 02:59:25.581443 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f98f1c17-4764-4037-a6c8-85cccbdd19a0-cilium-config-path\") pod \"f98f1c17-4764-4037-a6c8-85cccbdd19a0\" (UID: \"f98f1c17-4764-4037-a6c8-85cccbdd19a0\") " Feb 9 02:59:25.582710 kubelet[2124]: I0209 02:59:25.581494 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqdtx\" (UniqueName: \"kubernetes.io/projected/f98f1c17-4764-4037-a6c8-85cccbdd19a0-kube-api-access-cqdtx\") pod \"f98f1c17-4764-4037-a6c8-85cccbdd19a0\" (UID: \"f98f1c17-4764-4037-a6c8-85cccbdd19a0\") " Feb 9 02:59:25.603842 env[1156]: time="2024-02-09T02:59:25.603809933Z" level=info msg="shim disconnected" id=454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab Feb 9 02:59:25.604048 env[1156]: time="2024-02-09T02:59:25.604035694Z" level=warning msg="cleaning up after shim disconnected" id=454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab namespace=k8s.io Feb 9 02:59:25.604110 env[1156]: time="2024-02-09T02:59:25.604099646Z" level=info msg="cleaning up dead shim" Feb 9 02:59:25.609989 env[1156]: time="2024-02-09T02:59:25.609959185Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3996 runtime=io.containerd.runc.v2\n" Feb 9 02:59:25.620547 env[1156]: time="2024-02-09T02:59:25.620515839Z" level=info msg="TearDown network for sandbox \"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" successfully" Feb 9 02:59:25.620666 env[1156]: time="2024-02-09T02:59:25.620653492Z" level=info msg="StopPodSandbox for 
\"454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab\" returns successfully" Feb 9 02:59:25.646595 kubelet[2124]: W0209 02:59:25.646548 2124 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f98f1c17-4764-4037-a6c8-85cccbdd19a0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 02:59:25.650091 kubelet[2124]: I0209 02:59:25.649564 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98f1c17-4764-4037-a6c8-85cccbdd19a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f98f1c17-4764-4037-a6c8-85cccbdd19a0" (UID: "f98f1c17-4764-4037-a6c8-85cccbdd19a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 02:59:25.665433 kubelet[2124]: I0209 02:59:25.665398 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98f1c17-4764-4037-a6c8-85cccbdd19a0-kube-api-access-cqdtx" (OuterVolumeSpecName: "kube-api-access-cqdtx") pod "f98f1c17-4764-4037-a6c8-85cccbdd19a0" (UID: "f98f1c17-4764-4037-a6c8-85cccbdd19a0"). InnerVolumeSpecName "kube-api-access-cqdtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 02:59:25.682446 kubelet[2124]: I0209 02:59:25.682426 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-host-proc-sys-net\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.682619 kubelet[2124]: I0209 02:59:25.682610 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a377a2-7591-4973-bffb-c582258d3312-clustermesh-secrets\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.682686 kubelet[2124]: I0209 02:59:25.682678 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cilium-run\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.682755 kubelet[2124]: I0209 02:59:25.682748 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85n4\" (UniqueName: \"kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-kube-api-access-q85n4\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.682826 kubelet[2124]: I0209 02:59:25.682818 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cilium-cgroup\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.682894 kubelet[2124]: I0209 02:59:25.682887 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/81a377a2-7591-4973-bffb-c582258d3312-cilium-config-path\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.682983 kubelet[2124]: I0209 02:59:25.682967 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cni-path\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.683044 kubelet[2124]: I0209 02:59:25.683037 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-lib-modules\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.683109 kubelet[2124]: I0209 02:59:25.683102 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-hubble-tls\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.683174 kubelet[2124]: I0209 02:59:25.683165 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-hostproc\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.683240 kubelet[2124]: I0209 02:59:25.683233 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-bpf-maps\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.683309 kubelet[2124]: I0209 02:59:25.683301 2124 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-etc-cni-netd\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.683374 kubelet[2124]: I0209 02:59:25.683366 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-xtables-lock\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.683439 kubelet[2124]: I0209 02:59:25.683431 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-host-proc-sys-kernel\") pod \"81a377a2-7591-4973-bffb-c582258d3312\" (UID: \"81a377a2-7591-4973-bffb-c582258d3312\") " Feb 9 02:59:25.683530 kubelet[2124]: I0209 02:59:25.683521 2124 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cqdtx\" (UniqueName: \"kubernetes.io/projected/f98f1c17-4764-4037-a6c8-85cccbdd19a0-kube-api-access-cqdtx\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.683593 kubelet[2124]: I0209 02:59:25.683586 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f98f1c17-4764-4037-a6c8-85cccbdd19a0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.683684 kubelet[2124]: I0209 02:59:25.683674 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.683864 kubelet[2124]: I0209 02:59:25.683853 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.683940 kubelet[2124]: I0209 02:59:25.683931 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.684351 kubelet[2124]: I0209 02:59:25.684331 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.684499 kubelet[2124]: I0209 02:59:25.684489 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cni-path" (OuterVolumeSpecName: "cni-path") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.684663 kubelet[2124]: I0209 02:59:25.684570 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-hostproc" (OuterVolumeSpecName: "hostproc") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.684663 kubelet[2124]: I0209 02:59:25.684578 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.684663 kubelet[2124]: I0209 02:59:25.684584 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.684663 kubelet[2124]: I0209 02:59:25.684591 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.684663 kubelet[2124]: W0209 02:59:25.684587 2124 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/81a377a2-7591-4973-bffb-c582258d3312/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 02:59:25.684894 kubelet[2124]: I0209 02:59:25.684605 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:25.685937 kubelet[2124]: I0209 02:59:25.685907 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a377a2-7591-4973-bffb-c582258d3312-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 02:59:25.687341 kubelet[2124]: I0209 02:59:25.687323 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-kube-api-access-q85n4" (OuterVolumeSpecName: "kube-api-access-q85n4") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "kube-api-access-q85n4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 02:59:25.687984 kubelet[2124]: I0209 02:59:25.687448 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a377a2-7591-4973-bffb-c582258d3312-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 02:59:25.688960 kubelet[2124]: I0209 02:59:25.688947 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "81a377a2-7591-4973-bffb-c582258d3312" (UID: "81a377a2-7591-4973-bffb-c582258d3312"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 02:59:25.784368 kubelet[2124]: I0209 02:59:25.784346 2124 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784518 kubelet[2124]: I0209 02:59:25.784508 2124 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784584 kubelet[2124]: I0209 02:59:25.784576 2124 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784984 kubelet[2124]: I0209 02:59:25.784710 2124 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784984 
kubelet[2124]: I0209 02:59:25.784720 2124 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784984 kubelet[2124]: I0209 02:59:25.784727 2124 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784984 kubelet[2124]: I0209 02:59:25.784733 2124 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a377a2-7591-4973-bffb-c582258d3312-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784984 kubelet[2124]: I0209 02:59:25.784739 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784984 kubelet[2124]: I0209 02:59:25.784745 2124 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-q85n4\" (UniqueName: \"kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-kube-api-access-q85n4\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784984 kubelet[2124]: I0209 02:59:25.784750 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.784984 kubelet[2124]: I0209 02:59:25.784757 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81a377a2-7591-4973-bffb-c582258d3312-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.785436 kubelet[2124]: I0209 02:59:25.784762 2124 reconciler_common.go:295] 
"Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.785436 kubelet[2124]: I0209 02:59:25.784767 2124 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a377a2-7591-4973-bffb-c582258d3312-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.785436 kubelet[2124]: I0209 02:59:25.784778 2124 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a377a2-7591-4973-bffb-c582258d3312-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:25.843637 systemd[1]: Removed slice kubepods-besteffort-podf98f1c17_4764_4037_a6c8_85cccbdd19a0.slice. Feb 9 02:59:25.844978 systemd[1]: Removed slice kubepods-burstable-pod81a377a2_7591_4973_bffb_c582258d3312.slice. Feb 9 02:59:25.845031 systemd[1]: kubepods-burstable-pod81a377a2_7591_4973_bffb_c582258d3312.slice: Consumed 4.725s CPU time. 
Feb 9 02:59:26.064181 kubelet[2124]: I0209 02:59:26.064156 2124 scope.go:115] "RemoveContainer" containerID="2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc" Feb 9 02:59:26.147679 env[1156]: time="2024-02-09T02:59:26.147583461Z" level=info msg="RemoveContainer for \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\"" Feb 9 02:59:26.162525 env[1156]: time="2024-02-09T02:59:26.162483527Z" level=info msg="RemoveContainer for \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\" returns successfully" Feb 9 02:59:26.170770 kubelet[2124]: I0209 02:59:26.170740 2124 scope.go:115] "RemoveContainer" containerID="307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff" Feb 9 02:59:26.191249 env[1156]: time="2024-02-09T02:59:26.191219950Z" level=info msg="RemoveContainer for \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\"" Feb 9 02:59:26.221533 env[1156]: time="2024-02-09T02:59:26.221501533Z" level=info msg="RemoveContainer for \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\" returns successfully" Feb 9 02:59:26.221826 kubelet[2124]: I0209 02:59:26.221805 2124 scope.go:115] "RemoveContainer" containerID="0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e" Feb 9 02:59:26.230085 env[1156]: time="2024-02-09T02:59:26.230057305Z" level=info msg="RemoveContainer for \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\"" Feb 9 02:59:26.252734 env[1156]: time="2024-02-09T02:59:26.252699235Z" level=info msg="RemoveContainer for \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\" returns successfully" Feb 9 02:59:26.253043 kubelet[2124]: I0209 02:59:26.253023 2124 scope.go:115] "RemoveContainer" containerID="6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6" Feb 9 02:59:26.270021 env[1156]: time="2024-02-09T02:59:26.269990117Z" level=info msg="RemoveContainer for 
\"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\"" Feb 9 02:59:26.285492 env[1156]: time="2024-02-09T02:59:26.285450202Z" level=info msg="RemoveContainer for \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\" returns successfully" Feb 9 02:59:26.285735 kubelet[2124]: I0209 02:59:26.285720 2124 scope.go:115] "RemoveContainer" containerID="dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae" Feb 9 02:59:26.292031 env[1156]: time="2024-02-09T02:59:26.292000050Z" level=info msg="RemoveContainer for \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\"" Feb 9 02:59:26.300207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc-rootfs.mount: Deactivated successfully. Feb 9 02:59:26.300304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab-rootfs.mount: Deactivated successfully. Feb 9 02:59:26.300351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-454609113eecb171c230335e27291104b221c80cf8ee26c31064ca4c122341ab-shm.mount: Deactivated successfully. Feb 9 02:59:26.300391 systemd[1]: var-lib-kubelet-pods-81a377a2\x2d7591\x2d4973\x2dbffb\x2dc582258d3312-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 02:59:26.300427 systemd[1]: var-lib-kubelet-pods-81a377a2\x2d7591\x2d4973\x2dbffb\x2dc582258d3312-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 02:59:26.300459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91251ff01b9e5477d96cccd9064766240a4cd5e8d7691640570ef1d6d57645fc-rootfs.mount: Deactivated successfully. Feb 9 02:59:26.300491 systemd[1]: var-lib-kubelet-pods-81a377a2\x2d7591\x2d4973\x2dbffb\x2dc582258d3312-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq85n4.mount: Deactivated successfully. 
Feb 9 02:59:26.300524 systemd[1]: var-lib-kubelet-pods-f98f1c17\x2d4764\x2d4037\x2da6c8\x2d85cccbdd19a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqdtx.mount: Deactivated successfully. Feb 9 02:59:26.310490 env[1156]: time="2024-02-09T02:59:26.310453936Z" level=info msg="RemoveContainer for \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\" returns successfully" Feb 9 02:59:26.310715 kubelet[2124]: I0209 02:59:26.310665 2124 scope.go:115] "RemoveContainer" containerID="2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc" Feb 9 02:59:26.310901 env[1156]: time="2024-02-09T02:59:26.310846144Z" level=error msg="ContainerStatus for \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\": not found" Feb 9 02:59:26.337274 kubelet[2124]: E0209 02:59:26.337229 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\": not found" containerID="2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc" Feb 9 02:59:26.351017 kubelet[2124]: I0209 02:59:26.350971 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc} err="failed to get container status \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ed74fbed19c0bc56f7b56aa39d386e229c27d28c1f279a8d9e61e276127dccc\": not found" Feb 9 02:59:26.351017 kubelet[2124]: I0209 02:59:26.351013 2124 scope.go:115] "RemoveContainer" containerID="307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff" Feb 9 02:59:26.351720 
env[1156]: time="2024-02-09T02:59:26.351399847Z" level=error msg="ContainerStatus for \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\": not found" Feb 9 02:59:26.351775 kubelet[2124]: E0209 02:59:26.351542 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\": not found" containerID="307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff" Feb 9 02:59:26.351775 kubelet[2124]: I0209 02:59:26.351571 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff} err="failed to get container status \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\": rpc error: code = NotFound desc = an error occurred when try to find container \"307cb77ee7c2e1050b8f451f797a0d385fb6bcf66bea136be8d633f4df394eff\": not found" Feb 9 02:59:26.351775 kubelet[2124]: I0209 02:59:26.351579 2124 scope.go:115] "RemoveContainer" containerID="0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e" Feb 9 02:59:26.352085 env[1156]: time="2024-02-09T02:59:26.351675304Z" level=error msg="ContainerStatus for \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\": not found" Feb 9 02:59:26.352121 kubelet[2124]: E0209 02:59:26.352003 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\": not found" containerID="0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e" Feb 9 02:59:26.352121 kubelet[2124]: I0209 02:59:26.352017 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e} err="failed to get container status \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0636db98a6d991cda79557a27368c3194fc515c7f774c3672a3f9e9e75df530e\": not found" Feb 9 02:59:26.352121 kubelet[2124]: I0209 02:59:26.352023 2124 scope.go:115] "RemoveContainer" containerID="6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6" Feb 9 02:59:26.352278 env[1156]: time="2024-02-09T02:59:26.352249813Z" level=error msg="ContainerStatus for \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\": not found" Feb 9 02:59:26.352490 kubelet[2124]: E0209 02:59:26.352404 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\": not found" containerID="6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6" Feb 9 02:59:26.352490 kubelet[2124]: I0209 02:59:26.352418 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6} err="failed to get container status \"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"6495df5d4cc64fcd429281d7d21c7e69a9059b2346df191a3add8e0be176e6d6\": not found" Feb 9 02:59:26.352490 kubelet[2124]: I0209 02:59:26.352424 2124 scope.go:115] "RemoveContainer" containerID="dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae" Feb 9 02:59:26.352688 env[1156]: time="2024-02-09T02:59:26.352662926Z" level=error msg="ContainerStatus for \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\": not found" Feb 9 02:59:26.352895 kubelet[2124]: E0209 02:59:26.352816 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\": not found" containerID="dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae" Feb 9 02:59:26.352895 kubelet[2124]: I0209 02:59:26.352831 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae} err="failed to get container status \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\": rpc error: code = NotFound desc = an error occurred when try to find container \"dcb056c3cbc002c72ce6b93384bea040bbc35a266e75cb708e8120c54db12eae\": not found" Feb 9 02:59:26.352895 kubelet[2124]: I0209 02:59:26.352837 2124 scope.go:115] "RemoveContainer" containerID="11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24" Feb 9 02:59:26.353643 env[1156]: time="2024-02-09T02:59:26.353621122Z" level=info msg="RemoveContainer for \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\"" Feb 9 02:59:26.372680 env[1156]: time="2024-02-09T02:59:26.372651297Z" level=info msg="RemoveContainer for 
\"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\" returns successfully" Feb 9 02:59:26.373060 kubelet[2124]: I0209 02:59:26.372993 2124 scope.go:115] "RemoveContainer" containerID="11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24" Feb 9 02:59:26.373202 env[1156]: time="2024-02-09T02:59:26.373159490Z" level=error msg="ContainerStatus for \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\": not found" Feb 9 02:59:26.373282 kubelet[2124]: E0209 02:59:26.373269 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\": not found" containerID="11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24" Feb 9 02:59:26.373314 kubelet[2124]: I0209 02:59:26.373291 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24} err="failed to get container status \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\": rpc error: code = NotFound desc = an error occurred when try to find container \"11d006bf1aa208fe99a6bd82e0d6ab1263cd8e18f4a91d1c1468c9cf4bcffb24\": not found" Feb 9 02:59:27.272388 systemd[1]: Started sshd@23-139.178.70.99:22-147.75.109.163:33148.service. Feb 9 02:59:27.281809 sshd[3854]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:27.320312 systemd-logind[1145]: Session 25 logged out. Waiting for processes to exit. Feb 9 02:59:27.320413 systemd[1]: sshd@22-139.178.70.99:22-147.75.109.163:38120.service: Deactivated successfully. Feb 9 02:59:27.320912 systemd[1]: session-25.scope: Deactivated successfully. 
Feb 9 02:59:27.321621 systemd-logind[1145]: Removed session 25. Feb 9 02:59:27.483735 sshd[4015]: Accepted publickey for core from 147.75.109.163 port 33148 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:27.484873 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:27.491013 systemd-logind[1145]: New session 26 of user core. Feb 9 02:59:27.491277 systemd[1]: Started session-26.scope. Feb 9 02:59:27.819592 kubelet[2124]: I0209 02:59:27.819574 2124 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=81a377a2-7591-4973-bffb-c582258d3312 path="/var/lib/kubelet/pods/81a377a2-7591-4973-bffb-c582258d3312/volumes" Feb 9 02:59:27.865580 kubelet[2124]: I0209 02:59:27.865556 2124 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f98f1c17-4764-4037-a6c8-85cccbdd19a0 path="/var/lib/kubelet/pods/f98f1c17-4764-4037-a6c8-85cccbdd19a0/volumes" Feb 9 02:59:27.990403 systemd[1]: Started sshd@24-139.178.70.99:22-147.75.109.163:33156.service. Feb 9 02:59:27.991670 sshd[4015]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:27.997441 systemd-logind[1145]: Session 26 logged out. Waiting for processes to exit. Feb 9 02:59:27.997782 systemd[1]: sshd@23-139.178.70.99:22-147.75.109.163:33148.service: Deactivated successfully. Feb 9 02:59:27.998278 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 02:59:27.999191 systemd-logind[1145]: Removed session 26. 
Feb 9 02:59:28.025610 kubelet[2124]: I0209 02:59:28.025581 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:59:28.025888 kubelet[2124]: E0209 02:59:28.025873 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f98f1c17-4764-4037-a6c8-85cccbdd19a0" containerName="cilium-operator" Feb 9 02:59:28.025888 kubelet[2124]: E0209 02:59:28.025887 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a377a2-7591-4973-bffb-c582258d3312" containerName="mount-cgroup" Feb 9 02:59:28.026011 kubelet[2124]: E0209 02:59:28.025894 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a377a2-7591-4973-bffb-c582258d3312" containerName="apply-sysctl-overwrites" Feb 9 02:59:28.026011 kubelet[2124]: E0209 02:59:28.025901 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a377a2-7591-4973-bffb-c582258d3312" containerName="mount-bpf-fs" Feb 9 02:59:28.026011 kubelet[2124]: E0209 02:59:28.025907 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a377a2-7591-4973-bffb-c582258d3312" containerName="clean-cilium-state" Feb 9 02:59:28.026011 kubelet[2124]: E0209 02:59:28.025935 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a377a2-7591-4973-bffb-c582258d3312" containerName="cilium-agent" Feb 9 02:59:28.026615 kubelet[2124]: I0209 02:59:28.026601 2124 memory_manager.go:346] "RemoveStaleState removing state" podUID="f98f1c17-4764-4037-a6c8-85cccbdd19a0" containerName="cilium-operator" Feb 9 02:59:28.026615 kubelet[2124]: I0209 02:59:28.026615 2124 memory_manager.go:346] "RemoveStaleState removing state" podUID="81a377a2-7591-4973-bffb-c582258d3312" containerName="cilium-agent" Feb 9 02:59:28.029261 sshd[4025]: Accepted publickey for core from 147.75.109.163 port 33156 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:28.030573 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:28.032714 
systemd[1]: Created slice kubepods-burstable-pod2303ef9a_df4f_44aa_a2b4_bfab5d5a39f4.slice. Feb 9 02:59:28.037863 systemd[1]: Started session-27.scope. Feb 9 02:59:28.038685 systemd-logind[1145]: New session 27 of user core. Feb 9 02:59:28.124776 kubelet[2124]: I0209 02:59:28.124703 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-run\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124776 kubelet[2124]: I0209 02:59:28.124734 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-bpf-maps\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124776 kubelet[2124]: I0209 02:59:28.124752 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-config-path\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124776 kubelet[2124]: I0209 02:59:28.124768 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-hubble-tls\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124963 kubelet[2124]: I0209 02:59:28.124782 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-lib-modules\") pod \"cilium-xzjfj\" (UID: 
\"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124963 kubelet[2124]: I0209 02:59:28.124799 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cni-path\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124963 kubelet[2124]: I0209 02:59:28.124812 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-etc-cni-netd\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124963 kubelet[2124]: I0209 02:59:28.124825 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-host-proc-sys-net\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124963 kubelet[2124]: I0209 02:59:28.124836 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-xtables-lock\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.124963 kubelet[2124]: I0209 02:59:28.124849 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-hostproc\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.125085 kubelet[2124]: I0209 02:59:28.124861 2124 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-cgroup\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.125085 kubelet[2124]: I0209 02:59:28.124873 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-clustermesh-secrets\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.125085 kubelet[2124]: I0209 02:59:28.124886 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-ipsec-secrets\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.125085 kubelet[2124]: I0209 02:59:28.124898 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-host-proc-sys-kernel\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.125085 kubelet[2124]: I0209 02:59:28.124910 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzfnf\" (UniqueName: \"kubernetes.io/projected/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-kube-api-access-wzfnf\") pod \"cilium-xzjfj\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " pod="kube-system/cilium-xzjfj" Feb 9 02:59:28.335465 env[1156]: time="2024-02-09T02:59:28.335171097Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-xzjfj,Uid:2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4,Namespace:kube-system,Attempt:0,}" Feb 9 02:59:28.447518 env[1156]: time="2024-02-09T02:59:28.447429057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 02:59:28.447661 env[1156]: time="2024-02-09T02:59:28.447644782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 02:59:28.447725 env[1156]: time="2024-02-09T02:59:28.447711900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 02:59:28.448018 env[1156]: time="2024-02-09T02:59:28.447982207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715 pid=4048 runtime=io.containerd.runc.v2 Feb 9 02:59:28.451766 sshd[4025]: pam_unix(sshd:session): session closed for user core Feb 9 02:59:28.453118 systemd[1]: Started sshd@25-139.178.70.99:22-147.75.109.163:33168.service. Feb 9 02:59:28.456780 systemd[1]: sshd@24-139.178.70.99:22-147.75.109.163:33156.service: Deactivated successfully. Feb 9 02:59:28.457393 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 02:59:28.459482 systemd-logind[1145]: Session 27 logged out. Waiting for processes to exit. Feb 9 02:59:28.463138 systemd-logind[1145]: Removed session 27. Feb 9 02:59:28.474227 systemd[1]: Started cri-containerd-9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715.scope. Feb 9 02:59:28.502545 sshd[4064]: Accepted publickey for core from 147.75.109.163 port 33168 ssh2: RSA SHA256:G/HZf4mzZLfmin3SA9FpQ0tzBPLxutwkENt905hiC+Y Feb 9 02:59:28.502835 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 02:59:28.506007 systemd[1]: Started session-28.scope. 
Feb 9 02:59:28.507112 systemd-logind[1145]: New session 28 of user core. Feb 9 02:59:28.516509 env[1156]: time="2024-02-09T02:59:28.516483811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xzjfj,Uid:2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715\"" Feb 9 02:59:28.519407 env[1156]: time="2024-02-09T02:59:28.519383782Z" level=info msg="CreateContainer within sandbox \"9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 02:59:28.684192 env[1156]: time="2024-02-09T02:59:28.684150955Z" level=info msg="CreateContainer within sandbox \"9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf\"" Feb 9 02:59:28.684683 env[1156]: time="2024-02-09T02:59:28.684652308Z" level=info msg="StartContainer for \"de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf\"" Feb 9 02:59:28.697967 systemd[1]: Started cri-containerd-de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf.scope. Feb 9 02:59:28.719167 systemd[1]: cri-containerd-de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf.scope: Deactivated successfully. 
Feb 9 02:59:28.796178 env[1156]: time="2024-02-09T02:59:28.796118884Z" level=info msg="shim disconnected" id=de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf Feb 9 02:59:28.796345 env[1156]: time="2024-02-09T02:59:28.796180221Z" level=warning msg="cleaning up after shim disconnected" id=de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf namespace=k8s.io Feb 9 02:59:28.796345 env[1156]: time="2024-02-09T02:59:28.796196085Z" level=info msg="cleaning up dead shim" Feb 9 02:59:28.804358 env[1156]: time="2024-02-09T02:59:28.804214666Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4122 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T02:59:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 02:59:28.804711 env[1156]: time="2024-02-09T02:59:28.804640517Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Feb 9 02:59:28.805612 env[1156]: time="2024-02-09T02:59:28.804803120Z" level=error msg="Failed to pipe stdout of container \"de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf\"" error="reading from a closed fifo" Feb 9 02:59:28.805699 env[1156]: time="2024-02-09T02:59:28.805573550Z" level=error msg="Failed to pipe stderr of container \"de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf\"" error="reading from a closed fifo" Feb 9 02:59:28.813249 env[1156]: time="2024-02-09T02:59:28.813198988Z" level=error msg="StartContainer for \"de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 02:59:28.813619 kubelet[2124]: E0209 02:59:28.813487 2124 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf" Feb 9 02:59:28.826326 kubelet[2124]: E0209 02:59:28.826256 2124 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 02:59:28.826326 kubelet[2124]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 02:59:28.826326 kubelet[2124]: rm /hostbin/cilium-mount Feb 9 02:59:28.826326 kubelet[2124]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wzfnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xzjfj_kube-system(2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 02:59:28.839513 kubelet[2124]: E0209 02:59:28.826299 2124 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xzjfj" podUID=2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4 Feb 9 02:59:29.162999 env[1156]: time="2024-02-09T02:59:29.162288346Z" level=info msg="StopPodSandbox for \"9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715\"" Feb 9 02:59:29.162999 env[1156]: time="2024-02-09T02:59:29.162332832Z" level=info msg="Container to stop \"de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 02:59:29.173067 systemd[1]: cri-containerd-9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715.scope: Deactivated successfully. 
Feb 9 02:59:29.208602 env[1156]: time="2024-02-09T02:59:29.208552198Z" level=info msg="shim disconnected" id=9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715 Feb 9 02:59:29.209257 env[1156]: time="2024-02-09T02:59:29.209240711Z" level=warning msg="cleaning up after shim disconnected" id=9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715 namespace=k8s.io Feb 9 02:59:29.209352 env[1156]: time="2024-02-09T02:59:29.209339378Z" level=info msg="cleaning up dead shim" Feb 9 02:59:29.214858 env[1156]: time="2024-02-09T02:59:29.214819499Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4153 runtime=io.containerd.runc.v2\n" Feb 9 02:59:29.215130 env[1156]: time="2024-02-09T02:59:29.215109512Z" level=info msg="TearDown network for sandbox \"9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715\" successfully" Feb 9 02:59:29.215130 env[1156]: time="2024-02-09T02:59:29.215126996Z" level=info msg="StopPodSandbox for \"9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715\" returns successfully" Feb 9 02:59:29.232484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9920c806eb6fa39963c410cd303282f61909479e3a4c2a568f20abaf33de4715-shm.mount: Deactivated successfully. Feb 9 02:59:29.333957 kubelet[2124]: I0209 02:59:29.333846 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cni-path" (OuterVolumeSpecName: "cni-path") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.333957 kubelet[2124]: I0209 02:59:29.333891 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cni-path\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.333957 kubelet[2124]: I0209 02:59:29.333931 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-xtables-lock\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.333957 kubelet[2124]: I0209 02:59:29.333954 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-hostproc\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334136 kubelet[2124]: I0209 02:59:29.333973 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzfnf\" (UniqueName: \"kubernetes.io/projected/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-kube-api-access-wzfnf\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334136 kubelet[2124]: I0209 02:59:29.333984 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-etc-cni-netd\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334136 kubelet[2124]: I0209 02:59:29.333994 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-host-proc-sys-net\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334136 kubelet[2124]: I0209 02:59:29.334006 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-config-path\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334136 kubelet[2124]: I0209 02:59:29.334017 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-hubble-tls\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334136 kubelet[2124]: I0209 02:59:29.334027 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-lib-modules\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334258 kubelet[2124]: I0209 02:59:29.334037 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-cgroup\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334258 kubelet[2124]: I0209 02:59:29.334049 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-ipsec-secrets\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334258 kubelet[2124]: I0209 02:59:29.334060 2124 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-run\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334258 kubelet[2124]: I0209 02:59:29.334069 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-bpf-maps\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334258 kubelet[2124]: I0209 02:59:29.334079 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-host-proc-sys-kernel\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334258 kubelet[2124]: I0209 02:59:29.334090 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-clustermesh-secrets\") pod \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\" (UID: \"2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4\") " Feb 9 02:59:29.334634 kubelet[2124]: I0209 02:59:29.334618 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.334669 kubelet[2124]: I0209 02:59:29.334642 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-hostproc" (OuterVolumeSpecName: "hostproc") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.335160 kubelet[2124]: I0209 02:59:29.334965 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.335160 kubelet[2124]: I0209 02:59:29.334982 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.335160 kubelet[2124]: W0209 02:59:29.335043 2124 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 02:59:29.336005 kubelet[2124]: I0209 02:59:29.335988 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 02:59:29.337198 systemd[1]: var-lib-kubelet-pods-2303ef9a\x2ddf4f\x2d44aa\x2da2b4\x2dbfab5d5a39f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzfnf.mount: Deactivated successfully. Feb 9 02:59:29.338603 systemd[1]: var-lib-kubelet-pods-2303ef9a\x2ddf4f\x2d44aa\x2da2b4\x2dbfab5d5a39f4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 02:59:29.339171 kubelet[2124]: I0209 02:59:29.339156 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.339216 kubelet[2124]: I0209 02:59:29.339177 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.340897 systemd[1]: var-lib-kubelet-pods-2303ef9a\x2ddf4f\x2d44aa\x2da2b4\x2dbfab5d5a39f4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 02:59:29.341453 kubelet[2124]: I0209 02:59:29.341436 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.341494 kubelet[2124]: I0209 02:59:29.341458 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.341494 kubelet[2124]: I0209 02:59:29.341468 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 02:59:29.341538 kubelet[2124]: I0209 02:59:29.341511 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 02:59:29.341560 kubelet[2124]: I0209 02:59:29.341538 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-kube-api-access-wzfnf" (OuterVolumeSpecName: "kube-api-access-wzfnf") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "kube-api-access-wzfnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 02:59:29.341582 kubelet[2124]: I0209 02:59:29.341564 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 02:59:29.343149 systemd[1]: var-lib-kubelet-pods-2303ef9a\x2ddf4f\x2d44aa\x2da2b4\x2dbfab5d5a39f4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 02:59:29.343533 kubelet[2124]: I0209 02:59:29.343521 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" (UID: "2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 02:59:29.435110 kubelet[2124]: I0209 02:59:29.434208 2124 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435110 kubelet[2124]: I0209 02:59:29.434870 2124 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435110 kubelet[2124]: I0209 02:59:29.434905 2124 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435110 kubelet[2124]: I0209 02:59:29.434924 2124 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435110 kubelet[2124]: I0209 02:59:29.434935 2124 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-wzfnf\" (UniqueName: \"kubernetes.io/projected/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-kube-api-access-wzfnf\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435110 kubelet[2124]: I0209 02:59:29.434944 2124 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435110 kubelet[2124]: I0209 02:59:29.434951 2124 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435110 kubelet[2124]: I0209 02:59:29.434959 
2124 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435464 kubelet[2124]: I0209 02:59:29.434966 2124 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435464 kubelet[2124]: I0209 02:59:29.434973 2124 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435464 kubelet[2124]: I0209 02:59:29.434980 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435464 kubelet[2124]: I0209 02:59:29.434989 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435464 kubelet[2124]: I0209 02:59:29.434996 2124 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435464 kubelet[2124]: I0209 02:59:29.435004 2124 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.435464 kubelet[2124]: I0209 02:59:29.435011 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 02:59:29.820464 systemd[1]: Removed slice kubepods-burstable-pod2303ef9a_df4f_44aa_a2b4_bfab5d5a39f4.slice. Feb 9 02:59:29.875774 kubelet[2124]: E0209 02:59:29.875699 2124 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 02:59:30.164000 kubelet[2124]: I0209 02:59:30.163944 2124 scope.go:115] "RemoveContainer" containerID="de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf" Feb 9 02:59:30.165900 env[1156]: time="2024-02-09T02:59:30.165877056Z" level=info msg="RemoveContainer for \"de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf\"" Feb 9 02:59:30.167382 env[1156]: time="2024-02-09T02:59:30.167361498Z" level=info msg="RemoveContainer for \"de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf\" returns successfully" Feb 9 02:59:30.184895 kubelet[2124]: I0209 02:59:30.184876 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 02:59:30.185066 kubelet[2124]: E0209 02:59:30.185058 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" containerName="mount-cgroup" Feb 9 02:59:30.185152 kubelet[2124]: I0209 02:59:30.185145 2124 memory_manager.go:346] "RemoveStaleState removing state" podUID="2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4" containerName="mount-cgroup" Feb 9 02:59:30.188340 systemd[1]: Created slice kubepods-burstable-poddcae37ec_64cb_4084_b480_0ec189b84f74.slice. 
Feb 9 02:59:30.239285 kubelet[2124]: I0209 02:59:30.239266 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-cilium-cgroup\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7" Feb 9 02:59:30.239456 kubelet[2124]: I0209 02:59:30.239447 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dcae37ec-64cb-4084-b480-0ec189b84f74-cilium-ipsec-secrets\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7" Feb 9 02:59:30.239526 kubelet[2124]: I0209 02:59:30.239519 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-host-proc-sys-kernel\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7" Feb 9 02:59:30.239647 kubelet[2124]: I0209 02:59:30.239639 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-bpf-maps\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7" Feb 9 02:59:30.239711 kubelet[2124]: I0209 02:59:30.239704 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcae37ec-64cb-4084-b480-0ec189b84f74-cilium-config-path\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7" Feb 9 02:59:30.239779 kubelet[2124]: I0209 02:59:30.239773 2124 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-host-proc-sys-net\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.239847 kubelet[2124]: I0209 02:59:30.239830 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-cni-path\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.239903 kubelet[2124]: I0209 02:59:30.239895 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-etc-cni-netd\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.239995 kubelet[2124]: I0209 02:59:30.239988 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-lib-modules\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.240062 kubelet[2124]: I0209 02:59:30.240056 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-xtables-lock\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.240134 kubelet[2124]: I0209 02:59:30.240125 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qsgk\" (UniqueName: \"kubernetes.io/projected/dcae37ec-64cb-4084-b480-0ec189b84f74-kube-api-access-2qsgk\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.240194 kubelet[2124]: I0209 02:59:30.240181 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-cilium-run\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.240252 kubelet[2124]: I0209 02:59:30.240246 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcae37ec-64cb-4084-b480-0ec189b84f74-hostproc\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.240325 kubelet[2124]: I0209 02:59:30.240319 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcae37ec-64cb-4084-b480-0ec189b84f74-clustermesh-secrets\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.240387 kubelet[2124]: I0209 02:59:30.240375 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcae37ec-64cb-4084-b480-0ec189b84f74-hubble-tls\") pod \"cilium-zhjx7\" (UID: \"dcae37ec-64cb-4084-b480-0ec189b84f74\") " pod="kube-system/cilium-zhjx7"
Feb 9 02:59:30.491015 env[1156]: time="2024-02-09T02:59:30.490906014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zhjx7,Uid:dcae37ec-64cb-4084-b480-0ec189b84f74,Namespace:kube-system,Attempt:0,}"
Feb 9 02:59:30.519812 env[1156]: time="2024-02-09T02:59:30.519676945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 02:59:30.519812 env[1156]: time="2024-02-09T02:59:30.519705308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 02:59:30.519812 env[1156]: time="2024-02-09T02:59:30.519714667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 02:59:30.519985 env[1156]: time="2024-02-09T02:59:30.519823780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024 pid=4181 runtime=io.containerd.runc.v2
Feb 9 02:59:30.529846 systemd[1]: Started cri-containerd-a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024.scope.
Feb 9 02:59:30.548438 env[1156]: time="2024-02-09T02:59:30.548410510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zhjx7,Uid:dcae37ec-64cb-4084-b480-0ec189b84f74,Namespace:kube-system,Attempt:0,} returns sandbox id \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\""
Feb 9 02:59:30.550743 env[1156]: time="2024-02-09T02:59:30.550722848Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 02:59:30.556177 env[1156]: time="2024-02-09T02:59:30.556143315Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc\""
Feb 9 02:59:30.556675 env[1156]: time="2024-02-09T02:59:30.556661079Z" level=info msg="StartContainer for \"16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc\""
Feb 9 02:59:30.573192 systemd[1]: Started cri-containerd-16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc.scope.
Feb 9 02:59:30.598750 env[1156]: time="2024-02-09T02:59:30.598720166Z" level=info msg="StartContainer for \"16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc\" returns successfully"
Feb 9 02:59:30.616347 systemd[1]: cri-containerd-16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc.scope: Deactivated successfully.
Feb 9 02:59:30.632653 env[1156]: time="2024-02-09T02:59:30.632619193Z" level=info msg="shim disconnected" id=16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc
Feb 9 02:59:30.632653 env[1156]: time="2024-02-09T02:59:30.632649190Z" level=warning msg="cleaning up after shim disconnected" id=16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc namespace=k8s.io
Feb 9 02:59:30.632653 env[1156]: time="2024-02-09T02:59:30.632655726Z" level=info msg="cleaning up dead shim"
Feb 9 02:59:30.637666 env[1156]: time="2024-02-09T02:59:30.637644007Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4262 runtime=io.containerd.runc.v2\n"
Feb 9 02:59:31.168653 env[1156]: time="2024-02-09T02:59:31.168621606Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 02:59:31.174204 env[1156]: time="2024-02-09T02:59:31.173943224Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9\""
Feb 9 02:59:31.174384 env[1156]: time="2024-02-09T02:59:31.174361586Z" level=info msg="StartContainer for \"513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9\""
Feb 9 02:59:31.187883 systemd[1]: Started cri-containerd-513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9.scope.
Feb 9 02:59:31.205330 env[1156]: time="2024-02-09T02:59:31.205297348Z" level=info msg="StartContainer for \"513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9\" returns successfully"
Feb 9 02:59:31.216097 systemd[1]: cri-containerd-513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9.scope: Deactivated successfully.
Feb 9 02:59:31.235986 env[1156]: time="2024-02-09T02:59:31.235951935Z" level=info msg="shim disconnected" id=513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9
Feb 9 02:59:31.235986 env[1156]: time="2024-02-09T02:59:31.235981795Z" level=warning msg="cleaning up after shim disconnected" id=513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9 namespace=k8s.io
Feb 9 02:59:31.235986 env[1156]: time="2024-02-09T02:59:31.235988422Z" level=info msg="cleaning up dead shim"
Feb 9 02:59:31.240636 env[1156]: time="2024-02-09T02:59:31.240612732Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4320 runtime=io.containerd.runc.v2\n"
Feb 9 02:59:31.820105 kubelet[2124]: I0209 02:59:31.820085 2124 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4 path="/var/lib/kubelet/pods/2303ef9a-df4f-44aa-a2b4-bfab5d5a39f4/volumes"
Feb 9 02:59:31.926477 kubelet[2124]: W0209 02:59:31.926435 2124 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2303ef9a_df4f_44aa_a2b4_bfab5d5a39f4.slice/cri-containerd-de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf.scope WatchSource:0}: container "de344f75a9ca8dbf9c2bbb6479d4ae2c58af5add64ecc53bd6ef33a854e4b1cf" in namespace "k8s.io": not found
Feb 9 02:59:32.171072 env[1156]: time="2024-02-09T02:59:32.170818108Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 02:59:32.177323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024345085.mount: Deactivated successfully.
Feb 9 02:59:32.180563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731962704.mount: Deactivated successfully.
Feb 9 02:59:32.183071 env[1156]: time="2024-02-09T02:59:32.183044429Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8\""
Feb 9 02:59:32.183644 env[1156]: time="2024-02-09T02:59:32.183629918Z" level=info msg="StartContainer for \"607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8\""
Feb 9 02:59:32.196764 systemd[1]: Started cri-containerd-607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8.scope.
Feb 9 02:59:32.217125 env[1156]: time="2024-02-09T02:59:32.217100810Z" level=info msg="StartContainer for \"607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8\" returns successfully"
Feb 9 02:59:32.226510 systemd[1]: cri-containerd-607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8.scope: Deactivated successfully.
Feb 9 02:59:32.243003 env[1156]: time="2024-02-09T02:59:32.242879880Z" level=info msg="shim disconnected" id=607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8
Feb 9 02:59:32.243182 env[1156]: time="2024-02-09T02:59:32.243170012Z" level=warning msg="cleaning up after shim disconnected" id=607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8 namespace=k8s.io
Feb 9 02:59:32.243259 env[1156]: time="2024-02-09T02:59:32.243241819Z" level=info msg="cleaning up dead shim"
Feb 9 02:59:32.248056 env[1156]: time="2024-02-09T02:59:32.248029404Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4379 runtime=io.containerd.runc.v2\n"
Feb 9 02:59:33.171981 env[1156]: time="2024-02-09T02:59:33.171791601Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 02:59:33.178949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249692461.mount: Deactivated successfully.
Feb 9 02:59:33.181996 env[1156]: time="2024-02-09T02:59:33.181903089Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34\""
Feb 9 02:59:33.182500 env[1156]: time="2024-02-09T02:59:33.182472689Z" level=info msg="StartContainer for \"5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34\""
Feb 9 02:59:33.199399 systemd[1]: Started cri-containerd-5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34.scope.
Feb 9 02:59:33.215318 systemd[1]: cri-containerd-5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34.scope: Deactivated successfully.
Feb 9 02:59:33.216746 env[1156]: time="2024-02-09T02:59:33.216725553Z" level=info msg="StartContainer for \"5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34\" returns successfully"
Feb 9 02:59:33.230018 env[1156]: time="2024-02-09T02:59:33.229976825Z" level=info msg="shim disconnected" id=5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34
Feb 9 02:59:33.230018 env[1156]: time="2024-02-09T02:59:33.230014982Z" level=warning msg="cleaning up after shim disconnected" id=5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34 namespace=k8s.io
Feb 9 02:59:33.230018 env[1156]: time="2024-02-09T02:59:33.230021726Z" level=info msg="cleaning up dead shim"
Feb 9 02:59:33.235038 env[1156]: time="2024-02-09T02:59:33.235020598Z" level=warning msg="cleanup warnings time=\"2024-02-09T02:59:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4434 runtime=io.containerd.runc.v2\n"
Feb 9 02:59:33.511514 kubelet[2124]: I0209 02:59:33.511497 2124 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 02:59:33.511463295 +0000 UTC m=+163.807429753 LastTransitionTime:2024-02-09 02:59:33.511463295 +0000 UTC m=+163.807429753 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 02:59:34.175039 env[1156]: time="2024-02-09T02:59:34.175012911Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 02:59:34.225668 env[1156]: time="2024-02-09T02:59:34.225635854Z" level=info msg="CreateContainer within sandbox \"a70e8beb6743c7d0762e5d5cac1b79cb4a546665cb0553e2a0cf012422b74024\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04ae686ac3d180bb4f00c15fa65577a0013043ae5d5e2e8ebf37bfff4083762e\""
Feb 9 02:59:34.226023 env[1156]: time="2024-02-09T02:59:34.225999939Z" level=info msg="StartContainer for \"04ae686ac3d180bb4f00c15fa65577a0013043ae5d5e2e8ebf37bfff4083762e\""
Feb 9 02:59:34.237418 systemd[1]: Started cri-containerd-04ae686ac3d180bb4f00c15fa65577a0013043ae5d5e2e8ebf37bfff4083762e.scope.
Feb 9 02:59:34.261246 env[1156]: time="2024-02-09T02:59:34.261218364Z" level=info msg="StartContainer for \"04ae686ac3d180bb4f00c15fa65577a0013043ae5d5e2e8ebf37bfff4083762e\" returns successfully"
Feb 9 02:59:35.034803 kubelet[2124]: W0209 02:59:35.034503 2124 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcae37ec_64cb_4084_b480_0ec189b84f74.slice/cri-containerd-16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc.scope WatchSource:0}: task 16178a81cf2bd06dded6106b9c840df5beb506e2919cb8e6961954a5d3da44fc not found: not found
Feb 9 02:59:35.186404 kubelet[2124]: I0209 02:59:35.186221 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zhjx7" podStartSLOduration=5.186190887 pod.CreationTimestamp="2024-02-09 02:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 02:59:35.185742999 +0000 UTC m=+165.481709468" watchObservedRunningTime="2024-02-09 02:59:35.186190887 +0000 UTC m=+165.482157344"
Feb 9 02:59:35.395935 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 02:59:37.050601 systemd[1]: run-containerd-runc-k8s.io-04ae686ac3d180bb4f00c15fa65577a0013043ae5d5e2e8ebf37bfff4083762e-runc.LFvimB.mount: Deactivated successfully.
Feb 9 02:59:37.108982 kubelet[2124]: E0209 02:59:37.108317 2124 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59316->127.0.0.1:37095: write tcp 127.0.0.1:59316->127.0.0.1:37095: write: connection reset by peer
Feb 9 02:59:37.955694 systemd-networkd[1064]: lxc_health: Link UP
Feb 9 02:59:37.971197 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 02:59:37.967905 systemd-networkd[1064]: lxc_health: Gained carrier
Feb 9 02:59:38.140892 kubelet[2124]: W0209 02:59:38.140858 2124 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcae37ec_64cb_4084_b480_0ec189b84f74.slice/cri-containerd-513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9.scope WatchSource:0}: task 513be6e60f0fc57b171fada1a2dbfcf180b7911f3aacd3e9610abdc913039bb9 not found: not found
Feb 9 02:59:39.838061 systemd-networkd[1064]: lxc_health: Gained IPv6LL
Feb 9 02:59:41.246968 kubelet[2124]: W0209 02:59:41.246937 2124 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcae37ec_64cb_4084_b480_0ec189b84f74.slice/cri-containerd-607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8.scope WatchSource:0}: task 607049cc502ed35c1bcbc65e5e845986f96ec47fc8c139f6e2e80766a59ce3f8 not found: not found
Feb 9 02:59:41.268612 systemd[1]: run-containerd-runc-k8s.io-04ae686ac3d180bb4f00c15fa65577a0013043ae5d5e2e8ebf37bfff4083762e-runc.OrAAFL.mount: Deactivated successfully.
Feb 9 02:59:43.342210 systemd[1]: run-containerd-runc-k8s.io-04ae686ac3d180bb4f00c15fa65577a0013043ae5d5e2e8ebf37bfff4083762e-runc.RbAHwS.mount: Deactivated successfully.
Feb 9 02:59:43.378464 sshd[4064]: pam_unix(sshd:session): session closed for user core
Feb 9 02:59:43.382676 systemd[1]: sshd@25-139.178.70.99:22-147.75.109.163:33168.service: Deactivated successfully.
Feb 9 02:59:43.383122 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 02:59:43.383559 systemd-logind[1145]: Session 28 logged out. Waiting for processes to exit.
Feb 9 02:59:43.384320 systemd-logind[1145]: Removed session 28.
Feb 9 02:59:44.353258 kubelet[2124]: W0209 02:59:44.353219 2124 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcae37ec_64cb_4084_b480_0ec189b84f74.slice/cri-containerd-5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34.scope WatchSource:0}: task 5f23b45297138bfa90844ffaa0d2a3af946269c4f9352af2deadc40e95d9eb34 not found: not found