Mar 19 11:49:00.752657 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Mar 19 10:13:43 -00 2025 Mar 19 11:49:00.752674 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:49:00.752681 kernel: Disabled fast string operations Mar 19 11:49:00.752685 kernel: BIOS-provided physical RAM map: Mar 19 11:49:00.752689 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Mar 19 11:49:00.752693 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Mar 19 11:49:00.752700 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Mar 19 11:49:00.752704 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Mar 19 11:49:00.752708 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Mar 19 11:49:00.752712 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Mar 19 11:49:00.752717 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Mar 19 11:49:00.752721 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Mar 19 11:49:00.752725 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Mar 19 11:49:00.752730 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Mar 19 11:49:00.752736 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Mar 19 11:49:00.752741 kernel: NX (Execute Disable) protection: active Mar 19 11:49:00.752746 kernel: APIC: Static calls initialized Mar 19 11:49:00.752751 kernel: SMBIOS 2.7 present. Mar 19 11:49:00.752756 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Mar 19 11:49:00.752761 kernel: vmware: hypercall mode: 0x00 Mar 19 11:49:00.752766 kernel: Hypervisor detected: VMware Mar 19 11:49:00.752770 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Mar 19 11:49:00.752776 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Mar 19 11:49:00.752781 kernel: vmware: using clock offset of 4119233791 ns Mar 19 11:49:00.752786 kernel: tsc: Detected 3408.000 MHz processor Mar 19 11:49:00.752792 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 19 11:49:00.752797 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 19 11:49:00.752802 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Mar 19 11:49:00.752807 kernel: total RAM covered: 3072M Mar 19 11:49:00.752812 kernel: Found optimal setting for mtrr clean up Mar 19 11:49:00.752817 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Mar 19 11:49:00.752822 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Mar 19 11:49:00.752829 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 19 11:49:00.752833 kernel: Using GB pages for direct mapping Mar 19 11:49:00.752838 kernel: ACPI: Early table checksum verification disabled Mar 19 11:49:00.752843 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Mar 19 11:49:00.752848 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Mar 19 11:49:00.752853 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Mar 19 11:49:00.752858 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Mar 19 11:49:00.752863 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Mar 19 11:49:00.752871 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Mar 19 11:49:00.752876 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Mar 19 11:49:00.752881 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Mar 19 11:49:00.752886 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Mar 19 11:49:00.752892 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Mar 19 11:49:00.752897 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Mar 19 11:49:00.752903 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Mar 19 11:49:00.752908 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Mar 19 11:49:00.752914 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Mar 19 11:49:00.752919 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Mar 19 11:49:00.752924 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Mar 19 11:49:00.752929 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Mar 19 11:49:00.752934 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Mar 19 11:49:00.752939 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Mar 19 11:49:00.752944 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Mar 19 11:49:00.752950 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Mar 19 11:49:00.752955 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Mar 19 11:49:00.752960 kernel: system APIC only can use physical flat Mar 19 11:49:00.752965 kernel: APIC: Switched APIC routing to: physical flat Mar 19 11:49:00.752970 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 19 11:49:00.752976 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Mar 19 11:49:00.752981 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Mar 19 11:49:00.752986 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Mar 19 11:49:00.752991 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Mar 19 11:49:00.752996 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Mar 19 11:49:00.753002 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Mar 19 11:49:00.753007 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Mar 19 11:49:00.753012 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Mar 19 11:49:00.753017 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Mar 19 11:49:00.753022 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Mar 19 11:49:00.753027 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Mar 19 11:49:00.753032 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Mar 19 11:49:00.753037 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Mar 19 11:49:00.753042 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Mar 19 11:49:00.753047 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Mar 19 11:49:00.753053 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Mar 19 11:49:00.753058 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Mar 19 11:49:00.753064 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Mar 19 11:49:00.753068 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Mar 19 11:49:00.753074 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Mar 19 11:49:00.753079 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Mar 19 11:49:00.753083 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Mar 19 11:49:00.753088 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Mar 19 11:49:00.753093 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Mar 19 11:49:00.753098 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Mar 19 11:49:00.753105 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Mar 19 11:49:00.753110 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Mar 19 11:49:00.753115 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Mar 19 11:49:00.753120 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 Mar 19 11:49:00.753125 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Mar 19 11:49:00.753130 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Mar 19 11:49:00.753135 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Mar 19 11:49:00.753140 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Mar 19 11:49:00.753145 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Mar 19 11:49:00.753150 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Mar 19 11:49:00.753156 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Mar 19 11:49:00.753161 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Mar 19 11:49:00.753166 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Mar 19 11:49:00.753171 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Mar 19 11:49:00.753176 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Mar 19 11:49:00.753181 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Mar 19 11:49:00.753186 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Mar 19 11:49:00.753191 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Mar 19 11:49:00.753196 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Mar 19 11:49:00.753201 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Mar 19 11:49:00.753206 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Mar 19 11:49:00.753212 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Mar 19 11:49:00.753217 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Mar 19 11:49:00.754895 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Mar 19 11:49:00.754901 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Mar 19 11:49:00.754906 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Mar 19 11:49:00.754911 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Mar 19 11:49:00.754916 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Mar 19 11:49:00.754921 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Mar 19 11:49:00.754926 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Mar 19 11:49:00.754931 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Mar 19 11:49:00.754939 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Mar 19 11:49:00.754944 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Mar 19 11:49:00.754953 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Mar 19 11:49:00.754960 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Mar 19 11:49:00.754965 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Mar 19 11:49:00.754970 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Mar 19 11:49:00.754976 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Mar 19 11:49:00.754981 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Mar 19 11:49:00.754986 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Mar 19 11:49:00.754993 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Mar 19 11:49:00.754998 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Mar 19 11:49:00.755003 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Mar 19 11:49:00.755009 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Mar 19 11:49:00.755014 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Mar 19 11:49:00.755019 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Mar 19 11:49:00.755025 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Mar 19 11:49:00.755030 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Mar 19 11:49:00.755036 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Mar 19 11:49:00.755041 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Mar 19 11:49:00.755048 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Mar 19 11:49:00.755053 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Mar 19 11:49:00.755058 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Mar 19 11:49:00.755064 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Mar 19 11:49:00.755069 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Mar 19 11:49:00.755074 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Mar 19 11:49:00.755080 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Mar 19 11:49:00.755085 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 Mar 19 11:49:00.755090 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Mar 19 11:49:00.755096 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Mar 19 11:49:00.755102 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Mar 19 11:49:00.755107 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Mar 19 11:49:00.755113 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Mar 19 11:49:00.755118 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Mar 19 11:49:00.755123 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Mar 19 11:49:00.755128 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Mar 19 11:49:00.755134 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Mar 19 11:49:00.755139 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Mar 19 11:49:00.755144 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Mar 19 11:49:00.755150 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Mar 19 11:49:00.755156 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Mar 19 11:49:00.755161 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Mar 19 11:49:00.755167 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Mar 19 11:49:00.755179 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Mar 19 11:49:00.755184 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Mar 19 11:49:00.755190 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Mar 19 11:49:00.755195 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Mar 19 11:49:00.755201 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Mar 19 11:49:00.755206 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Mar 19 11:49:00.755211 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Mar 19 11:49:00.755217 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Mar 19 11:49:00.755232 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Mar 19 11:49:00.755238 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Mar 19 11:49:00.755243 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Mar 19 11:49:00.755248 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Mar 19 11:49:00.755254 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Mar 19 11:49:00.755259 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Mar 19 11:49:00.755264 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Mar 19 11:49:00.755269 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Mar 19 11:49:00.755275 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Mar 19 11:49:00.755281 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Mar 19 11:49:00.755288 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Mar 19 11:49:00.755293 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Mar 19 11:49:00.755298 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Mar 19 11:49:00.755304 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Mar 19 11:49:00.755309 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Mar 19 11:49:00.755315 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Mar 19 11:49:00.755320 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Mar 19 11:49:00.755325 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Mar 19 11:49:00.755331 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Mar 19 11:49:00.755337 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Mar 19 11:49:00.755343 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Mar 19 11:49:00.755348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 19 11:49:00.755354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Mar 19 11:49:00.755359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Mar 19 11:49:00.755365 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Mar 19 11:49:00.755371 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Mar 19 11:49:00.755377 kernel: Zone ranges: Mar 19 11:49:00.755382 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 19 11:49:00.755388 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] Mar 19 11:49:00.755395 kernel: Normal empty Mar 19 11:49:00.755400 kernel: Movable zone start for each node Mar 19 11:49:00.755406 kernel: Early memory node ranges Mar 19 11:49:00.755411 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Mar 19 11:49:00.755416 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Mar 19 11:49:00.755422 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Mar 19 11:49:00.755427 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Mar 19 11:49:00.755433 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 19 11:49:00.755438 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Mar 19 11:49:00.755445 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Mar 19 11:49:00.755451 kernel: ACPI: PM-Timer IO Port: 0x1008 Mar 19 11:49:00.755456 kernel: system APIC only can use physical flat Mar 19 11:49:00.755462 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Mar 19 11:49:00.755467 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Mar 19 11:49:00.755473 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Mar 19 11:49:00.755478 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Mar 19 11:49:00.755484 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Mar 19 11:49:00.755489 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Mar 19 11:49:00.755494 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Mar 19 11:49:00.755501 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Mar 19 11:49:00.755506 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Mar 19 11:49:00.755512 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Mar 19 11:49:00.755517 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Mar 19 11:49:00.755522 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Mar 19 11:49:00.755528 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Mar 19 11:49:00.755533 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Mar 19 11:49:00.755538 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Mar 19 11:49:00.755544 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Mar 19 11:49:00.755549 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Mar 19 11:49:00.755557 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Mar 19 11:49:00.755562 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Mar 19 11:49:00.755568 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Mar 19 11:49:00.755573 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Mar 19 11:49:00.755578 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Mar 19 11:49:00.755584 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Mar 19 11:49:00.755589 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Mar 19 11:49:00.755595 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Mar 19 11:49:00.755600 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Mar 19 11:49:00.755605 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Mar 19 11:49:00.755612 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Mar 19 11:49:00.755617 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Mar 19 11:49:00.755623 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Mar 19 11:49:00.755628 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Mar 19 11:49:00.755633 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Mar 19 11:49:00.755639 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Mar 19 11:49:00.755644 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Mar 19 11:49:00.755649 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Mar 19 11:49:00.755655 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Mar 19 11:49:00.755661 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Mar 19 11:49:00.755666 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Mar 19 11:49:00.755672 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Mar 19 11:49:00.755677 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Mar 19 11:49:00.755683 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Mar 19 11:49:00.755688 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Mar 19 11:49:00.755694 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Mar 19 11:49:00.755699 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Mar 19 11:49:00.755704 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Mar 19 11:49:00.755709 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Mar 19 11:49:00.755716 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Mar 19 11:49:00.755721 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Mar 19 11:49:00.755727 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Mar 19 11:49:00.755732 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Mar 19 11:49:00.755737 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Mar 19 11:49:00.755743 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Mar 19 11:49:00.755748 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Mar 19 11:49:00.755753 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Mar 19 11:49:00.755759 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Mar 19 11:49:00.755764 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Mar 19 11:49:00.755771 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Mar 19 11:49:00.755776 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Mar 19 11:49:00.755782 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Mar 19 11:49:00.755787 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Mar 19 11:49:00.755792 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Mar 19 11:49:00.755798 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Mar 19 11:49:00.755803 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Mar 19 11:49:00.755808 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Mar 19 11:49:00.755814 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Mar 19 11:49:00.755821 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Mar 19 11:49:00.755826 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Mar 19 11:49:00.755832 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Mar 19 11:49:00.755837 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Mar 19 11:49:00.755842 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Mar 19 11:49:00.755848 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Mar 19 11:49:00.755853 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Mar 19 11:49:00.755859 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Mar 19 11:49:00.755864 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Mar 19 11:49:00.755869 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Mar 19 11:49:00.755876 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Mar 19 11:49:00.755881 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Mar 19 11:49:00.755887 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Mar 19 11:49:00.755892 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Mar 19 11:49:00.755898 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Mar 19 11:49:00.755903 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Mar 19 11:49:00.755909 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Mar 19 11:49:00.755914 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Mar 19 11:49:00.755919 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Mar 19 11:49:00.755925 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Mar 19 11:49:00.755931 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Mar 19 11:49:00.755936 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Mar 19 11:49:00.755942 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Mar 19 11:49:00.755947 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Mar 19 11:49:00.755952 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Mar 19 11:49:00.755958 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Mar 19 11:49:00.755963 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Mar 19 11:49:00.755969 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Mar 19 11:49:00.755974 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Mar 19 11:49:00.755979 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Mar 19 11:49:00.755986 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Mar 19 11:49:00.755991 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Mar 19 11:49:00.755997 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Mar 19 11:49:00.756002 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Mar 19 11:49:00.756007 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Mar 19 11:49:00.756013 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Mar 19 11:49:00.756018 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Mar 19 11:49:00.756023 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Mar 19 11:49:00.756029 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Mar 19 11:49:00.756034 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Mar 19 11:49:00.756040 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Mar 19 11:49:00.756046 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Mar 19 11:49:00.756051 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Mar 19 11:49:00.756057 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Mar 19 11:49:00.756062 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Mar 19 11:49:00.756067 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Mar 19 11:49:00.756072 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Mar 19 11:49:00.756078 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Mar 19 11:49:00.756083 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Mar 19 11:49:00.756089 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Mar 19 11:49:00.756095 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Mar 19 11:49:00.756100 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Mar 19 
11:49:00.756105 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Mar 19 11:49:00.756111 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Mar 19 11:49:00.756116 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Mar 19 11:49:00.756121 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Mar 19 11:49:00.756127 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Mar 19 11:49:00.756132 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Mar 19 11:49:00.756137 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Mar 19 11:49:00.756144 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Mar 19 11:49:00.756149 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Mar 19 11:49:00.756155 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Mar 19 11:49:00.756160 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Mar 19 11:49:00.756165 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Mar 19 11:49:00.756171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Mar 19 11:49:00.756176 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 19 11:49:00.756182 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Mar 19 11:49:00.756187 kernel: TSC deadline timer available Mar 19 11:49:00.756193 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Mar 19 11:49:00.756200 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Mar 19 11:49:00.756205 kernel: Booting paravirtualized kernel on VMware hypervisor Mar 19 11:49:00.756211 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 19 11:49:00.756216 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Mar 19 11:49:00.756266 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Mar 19 11:49:00.756272 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Mar 19 11:49:00.756277 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Mar 19 11:49:00.756283 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Mar 19 11:49:00.756288 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Mar 19 11:49:00.756295 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Mar 19 11:49:00.756301 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Mar 19 11:49:00.756313 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Mar 19 11:49:00.756320 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Mar 19 11:49:00.756326 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Mar 19 11:49:00.756331 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Mar 19 11:49:00.756337 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Mar 19 11:49:00.756343 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Mar 19 11:49:00.756350 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Mar 19 11:49:00.756355 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Mar 19 11:49:00.756361 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Mar 19 11:49:00.756367 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Mar 19 11:49:00.756372 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Mar 19 11:49:00.756379 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:49:00.756385 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 19 11:49:00.756391 kernel: random: crng init done Mar 19 11:49:00.756397 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Mar 19 11:49:00.756403 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Mar 19 11:49:00.756410 kernel: printk: log_buf_len min size: 262144 bytes Mar 19 11:49:00.756416 kernel: printk: log_buf_len: 1048576 bytes Mar 19 11:49:00.756422 kernel: printk: early log buf free: 239648(91%) Mar 19 11:49:00.756428 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 19 11:49:00.756434 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 19 11:49:00.756440 kernel: Fallback order for Node 0: 0 Mar 19 11:49:00.759230 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Mar 19 11:49:00.759246 kernel: Policy zone: DMA32 Mar 19 11:49:00.759252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 19 11:49:00.759259 kernel: Memory: 1934352K/2096628K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43480K init, 1592K bss, 162016K reserved, 0K cma-reserved) Mar 19 11:49:00.759266 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Mar 19 11:49:00.759272 kernel: ftrace: allocating 37910 entries in 149 pages Mar 19 11:49:00.759278 kernel: ftrace: allocated 149 pages with 4 groups Mar 19 11:49:00.759285 kernel: Dynamic Preempt: voluntary Mar 19 11:49:00.759291 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 19 11:49:00.759298 kernel: rcu: RCU event tracing is enabled. Mar 19 11:49:00.759304 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Mar 19 11:49:00.759309 kernel: Trampoline variant of Tasks RCU enabled. Mar 19 11:49:00.759315 kernel: Rude variant of Tasks RCU enabled. Mar 19 11:49:00.759322 kernel: Tracing variant of Tasks RCU enabled. Mar 19 11:49:00.759327 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 19 11:49:00.759333 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Mar 19 11:49:00.759340 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Mar 19 11:49:00.759346 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Mar 19 11:49:00.759352 kernel: Console: colour VGA+ 80x25 Mar 19 11:49:00.759358 kernel: printk: console [tty0] enabled Mar 19 11:49:00.759364 kernel: printk: console [ttyS0] enabled Mar 19 11:49:00.759371 kernel: ACPI: Core revision 20230628 Mar 19 11:49:00.759377 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Mar 19 11:49:00.759382 kernel: APIC: Switch to symmetric I/O mode setup Mar 19 11:49:00.759388 kernel: x2apic enabled Mar 19 11:49:00.759394 kernel: APIC: Switched APIC routing to: physical x2apic Mar 19 11:49:00.759401 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 19 11:49:00.759407 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Mar 19 11:49:00.759413 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Mar 19 11:49:00.759419 kernel: Disabled fast string operations Mar 19 11:49:00.759425 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 19 11:49:00.759431 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 19 11:49:00.759437 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 19 11:49:00.759443 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Mar 19 11:49:00.759449 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Mar 19 11:49:00.759456 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Mar 19 11:49:00.759462 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 19 11:49:00.759468 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Mar 19 11:49:00.759474 kernel: RETBleed: Mitigation: Enhanced IBRS Mar 19 11:49:00.759480 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 19 11:49:00.759486 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 19 11:49:00.759492 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 19 11:49:00.759497 kernel: SRBDS: Unknown: Dependent on hypervisor status Mar 19 11:49:00.759504 kernel: GDS: Unknown: Dependent on hypervisor status Mar 19 11:49:00.759510 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 19 11:49:00.759516 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 19 11:49:00.759522 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 19 11:49:00.759528 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 19 11:49:00.759534 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 19 11:49:00.759540 kernel: Freeing SMP alternatives memory: 32K Mar 19 11:49:00.759545 kernel: pid_max: default: 131072 minimum: 1024 Mar 19 11:49:00.759551 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 19 11:49:00.759559 kernel: landlock: Up and running. Mar 19 11:49:00.759564 kernel: SELinux: Initializing. Mar 19 11:49:00.759570 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 19 11:49:00.759576 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 19 11:49:00.759582 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Mar 19 11:49:00.759588 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Mar 19 11:49:00.759594 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Mar 19 11:49:00.759600 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Mar 19 11:49:00.759606 kernel: Performance Events: Skylake events, core PMU driver. 
Mar 19 11:49:00.759614 kernel: core: CPUID marked event: 'cpu cycles' unavailable Mar 19 11:49:00.759620 kernel: core: CPUID marked event: 'instructions' unavailable Mar 19 11:49:00.759627 kernel: core: CPUID marked event: 'bus cycles' unavailable Mar 19 11:49:00.759636 kernel: core: CPUID marked event: 'cache references' unavailable Mar 19 11:49:00.759646 kernel: core: CPUID marked event: 'cache misses' unavailable Mar 19 11:49:00.759655 kernel: core: CPUID marked event: 'branch instructions' unavailable Mar 19 11:49:00.759662 kernel: core: CPUID marked event: 'branch misses' unavailable Mar 19 11:49:00.759668 kernel: ... version: 1 Mar 19 11:49:00.759676 kernel: ... bit width: 48 Mar 19 11:49:00.759682 kernel: ... generic registers: 4 Mar 19 11:49:00.759688 kernel: ... value mask: 0000ffffffffffff Mar 19 11:49:00.759694 kernel: ... max period: 000000007fffffff Mar 19 11:49:00.759700 kernel: ... fixed-purpose events: 0 Mar 19 11:49:00.759705 kernel: ... event mask: 000000000000000f Mar 19 11:49:00.759711 kernel: signal: max sigframe size: 1776 Mar 19 11:49:00.759717 kernel: rcu: Hierarchical SRCU implementation. Mar 19 11:49:00.759723 kernel: rcu: Max phase no-delay instances is 400. Mar 19 11:49:00.759729 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 19 11:49:00.759736 kernel: smp: Bringing up secondary CPUs ... Mar 19 11:49:00.759742 kernel: smpboot: x86: Booting SMP configuration: Mar 19 11:49:00.759748 kernel: .... node #0, CPUs: #1 Mar 19 11:49:00.759754 kernel: Disabled fast string operations Mar 19 11:49:00.759759 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Mar 19 11:49:00.759767 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Mar 19 11:49:00.759776 kernel: smp: Brought up 1 node, 2 CPUs Mar 19 11:49:00.759786 kernel: smpboot: Max logical packages: 128 Mar 19 11:49:00.759792 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Mar 19 11:49:00.759799 kernel: devtmpfs: initialized Mar 19 11:49:00.759806 kernel: x86/mm: Memory block size: 128MB Mar 19 11:49:00.759812 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Mar 19 11:49:00.759818 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 19 11:49:00.759824 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Mar 19 11:49:00.759830 kernel: pinctrl core: initialized pinctrl subsystem Mar 19 11:49:00.759836 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 19 11:49:00.759842 kernel: audit: initializing netlink subsys (disabled) Mar 19 11:49:00.759848 kernel: audit: type=2000 audit(1742384939.070:1): state=initialized audit_enabled=0 res=1 Mar 19 11:49:00.759855 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 19 11:49:00.759861 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 19 11:49:00.759867 kernel: cpuidle: using governor menu Mar 19 11:49:00.759873 kernel: Simple Boot Flag at 0x36 set to 0x80 Mar 19 11:49:00.759878 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 19 11:49:00.759884 kernel: dca service started, version 1.12.1 Mar 19 11:49:00.759890 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Mar 19 11:49:00.759896 kernel: PCI: Using configuration type 1 for base access Mar 19 11:49:00.759902 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 19 11:49:00.759909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 19 11:49:00.759915 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 19 11:49:00.759921 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 19 11:49:00.759927 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 19 11:49:00.759933 kernel: ACPI: Added _OSI(Module Device) Mar 19 11:49:00.759939 kernel: ACPI: Added _OSI(Processor Device) Mar 19 11:49:00.759945 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 19 11:49:00.759951 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 19 11:49:00.759957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 19 11:49:00.759964 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Mar 19 11:49:00.759970 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 19 11:49:00.759975 kernel: ACPI: Interpreter enabled Mar 19 11:49:00.759981 kernel: ACPI: PM: (supports S0 S1 S5) Mar 19 11:49:00.759987 kernel: ACPI: Using IOAPIC for interrupt routing Mar 19 11:49:00.759993 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 19 11:49:00.759999 kernel: PCI: Using E820 reservations for host bridge windows Mar 19 11:49:00.760005 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Mar 19 11:49:00.760011 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Mar 19 11:49:00.760102 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 19 11:49:00.760158 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Mar 19 11:49:00.760208 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Mar 19 11:49:00.760217 kernel: PCI host bridge to bus 0000:00 Mar 19 11:49:00.760290 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 19 11:49:00.760337 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Mar 19 11:49:00.760385 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 19 11:49:00.760430 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 19 11:49:00.760475 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Mar 19 11:49:00.760519 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Mar 19 11:49:00.760580 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Mar 19 11:49:00.760640 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Mar 19 11:49:00.760698 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Mar 19 11:49:00.760763 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Mar 19 11:49:00.760824 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Mar 19 11:49:00.760875 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Mar 19 11:49:00.760926 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Mar 19 11:49:00.760975 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Mar 19 11:49:00.761026 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Mar 19 11:49:00.761084 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Mar 19 11:49:00.761136 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Mar 19 11:49:00.761187 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Mar 19 11:49:00.763270 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Mar 19 11:49:00.763345 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Mar 19 11:49:00.763400 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Mar 19 11:49:00.763457 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Mar 19 11:49:00.763514 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Mar 19 11:49:00.763565 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Mar 19 11:49:00.763616 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Mar 19 11:49:00.763666 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Mar 19 11:49:00.763717 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 19 11:49:00.763771 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Mar 19 11:49:00.763829 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.763881 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.763936 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.763989 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.764044 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.764095 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.764152 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.764206 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.766289 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.766353 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.766412 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.766465 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.766521 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.766576 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.766632 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.766684 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.766738 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.766790 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.766848 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.766900 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.766955 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.767008 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.767066 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.767119 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.767175 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.767310 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.767375 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.767442 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.767498 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.767549 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.767603 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.767658 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.767713 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Mar 19 11:49:00.767765 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.767818 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.767870 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.767924 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.767980 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.768036 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.768087 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.768143 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.768194 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.768258 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.768314 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.768369 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.768421 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.768477 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.768529 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.770276 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.770340 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.770399 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.770453 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.770511 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.770564 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.770619 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.770672 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.770732 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.770786 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.770840 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.770894 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.770949 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.771002 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.771060 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Mar 19 11:49:00.771113 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.771168 kernel: pci_bus 0000:01: extended config space not accessible Mar 19 11:49:00.771227 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Mar 19 11:49:00.771283 kernel: pci_bus 0000:02: extended config space not accessible Mar 19 11:49:00.771292 kernel: acpiphp: Slot [32] registered Mar 19 11:49:00.771306 kernel: acpiphp: Slot [33] registered Mar 19 11:49:00.771315 kernel: acpiphp: Slot [34] registered Mar 19 11:49:00.771321 kernel: acpiphp: Slot [35] registered Mar 19 11:49:00.771327 kernel: acpiphp: Slot [36] registered Mar 19 11:49:00.771333 kernel: acpiphp: Slot [37] registered Mar 19 11:49:00.771338 kernel: acpiphp: Slot [38] registered Mar 19 11:49:00.771344 kernel: acpiphp: Slot [39] registered Mar 19 11:49:00.771350 kernel: acpiphp: Slot [40] registered Mar 19 11:49:00.771356 kernel: acpiphp: Slot [41] registered Mar 19 11:49:00.771362 kernel: acpiphp: Slot [42] registered Mar 19 
11:49:00.771369 kernel: acpiphp: Slot [43] registered Mar 19 11:49:00.771375 kernel: acpiphp: Slot [44] registered Mar 19 11:49:00.771381 kernel: acpiphp: Slot [45] registered Mar 19 11:49:00.771386 kernel: acpiphp: Slot [46] registered Mar 19 11:49:00.771392 kernel: acpiphp: Slot [47] registered Mar 19 11:49:00.771398 kernel: acpiphp: Slot [48] registered Mar 19 11:49:00.771403 kernel: acpiphp: Slot [49] registered Mar 19 11:49:00.771409 kernel: acpiphp: Slot [50] registered Mar 19 11:49:00.771415 kernel: acpiphp: Slot [51] registered Mar 19 11:49:00.771422 kernel: acpiphp: Slot [52] registered Mar 19 11:49:00.771428 kernel: acpiphp: Slot [53] registered Mar 19 11:49:00.771434 kernel: acpiphp: Slot [54] registered Mar 19 11:49:00.771440 kernel: acpiphp: Slot [55] registered Mar 19 11:49:00.771447 kernel: acpiphp: Slot [56] registered Mar 19 11:49:00.771456 kernel: acpiphp: Slot [57] registered Mar 19 11:49:00.771466 kernel: acpiphp: Slot [58] registered Mar 19 11:49:00.771476 kernel: acpiphp: Slot [59] registered Mar 19 11:49:00.771482 kernel: acpiphp: Slot [60] registered Mar 19 11:49:00.771487 kernel: acpiphp: Slot [61] registered Mar 19 11:49:00.771495 kernel: acpiphp: Slot [62] registered Mar 19 11:49:00.771500 kernel: acpiphp: Slot [63] registered Mar 19 11:49:00.771558 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Mar 19 11:49:00.771623 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Mar 19 11:49:00.771677 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Mar 19 11:49:00.771728 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 19 11:49:00.771779 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Mar 19 11:49:00.771830 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Mar 19 11:49:00.771884 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Mar 19 11:49:00.771934 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Mar 19 11:49:00.771985 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Mar 19 11:49:00.772043 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Mar 19 11:49:00.772096 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Mar 19 11:49:00.772153 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Mar 19 11:49:00.772217 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Mar 19 11:49:00.775640 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Mar 19 11:49:00.775698 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Mar 19 11:49:00.775753 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Mar 19 11:49:00.775806 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Mar 19 11:49:00.775857 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Mar 19 11:49:00.775911 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Mar 19 11:49:00.775964 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Mar 19 11:49:00.776015 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Mar 19 11:49:00.776070 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Mar 19 11:49:00.776122 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Mar 19 11:49:00.776174 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Mar 19 11:49:00.776331 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Mar 19 11:49:00.776387 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Mar 19 11:49:00.776441 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Mar 19 11:49:00.776492 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Mar 19 11:49:00.776546 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Mar 19 11:49:00.776598 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Mar 19 11:49:00.776648 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Mar 19 11:49:00.776699 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 19 11:49:00.776755 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Mar 19 11:49:00.776806 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Mar 19 11:49:00.776857 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Mar 19 11:49:00.776910 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Mar 19 11:49:00.776963 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Mar 19 11:49:00.777013 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Mar 19 11:49:00.777066 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Mar 19 11:49:00.777118 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Mar 19 11:49:00.777170 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Mar 19 11:49:00.777238 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Mar 19 11:49:00.777293 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Mar 19 11:49:00.777358 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Mar 19 11:49:00.777411 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Mar 19 11:49:00.777463 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Mar 19 11:49:00.777516 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Mar 19 11:49:00.777568 kernel: pci 0000:0b:00.0: supports D1 D2 Mar 19 11:49:00.777623 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 19 11:49:00.777675 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Mar 19 11:49:00.777728 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Mar 19 11:49:00.777780 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Mar 19 11:49:00.777831 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Mar 19 11:49:00.777884 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Mar 19 11:49:00.777935 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Mar 19 11:49:00.777986 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Mar 19 11:49:00.778040 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Mar 19 11:49:00.778093 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Mar 19 11:49:00.778144 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Mar 19 11:49:00.778195 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Mar 19 11:49:00.779611 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Mar 19 11:49:00.779673 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Mar 19 11:49:00.779728 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Mar 19 11:49:00.779784 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 19 11:49:00.779837 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Mar 19 11:49:00.779889 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Mar 19 11:49:00.779940 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 19 11:49:00.779993 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Mar 19 11:49:00.780044 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Mar 19 11:49:00.780098 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Mar 19 11:49:00.780156 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Mar 19 11:49:00.780211 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Mar 19 11:49:00.780277 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Mar 19 11:49:00.780331 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Mar 19 11:49:00.780383 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Mar 19 11:49:00.780434 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 19 11:49:00.780487 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Mar 19 11:49:00.780538 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Mar 19 11:49:00.780589 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Mar 19 11:49:00.780644 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 19 11:49:00.780699 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Mar 19 11:49:00.780750 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Mar 19 11:49:00.780801 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Mar 19 11:49:00.780853 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Mar 19 11:49:00.780906 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Mar 19 11:49:00.780958 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Mar 19 11:49:00.781008 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Mar 19 11:49:00.781063 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Mar 19 11:49:00.781116 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Mar 19 11:49:00.781167 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Mar 19 11:49:00.784065 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 19 11:49:00.784144 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Mar 19 11:49:00.784199 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Mar 19 11:49:00.784320 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 19 11:49:00.784381 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Mar 19 11:49:00.784434 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Mar 19 11:49:00.784485 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Mar 19 11:49:00.784538 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Mar 19 11:49:00.784588 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Mar 19 11:49:00.784639 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Mar 19 11:49:00.784691 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Mar 19 11:49:00.784743 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Mar 19 11:49:00.784794 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Mar 19 11:49:00.784849 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Mar 19 11:49:00.784901 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Mar 19 11:49:00.784951 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Mar 19 11:49:00.785002 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Mar 19 11:49:00.785055 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Mar 19 11:49:00.785105 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Mar 19 11:49:00.785155 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Mar 19 11:49:00.785210 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Mar 19 11:49:00.785270 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Mar 19 11:49:00.785322 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Mar 19 11:49:00.785372 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Mar 19 11:49:00.785425 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Mar 19 11:49:00.785475 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Mar 19 11:49:00.785527 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 19 11:49:00.785580 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Mar 19 11:49:00.785633 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Mar 19 11:49:00.785684 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Mar 19 11:49:00.785736 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Mar 19 11:49:00.785787 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Mar 19 11:49:00.785838 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Mar 19 11:49:00.785891 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Mar 19 11:49:00.785942 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Mar 19 11:49:00.785992 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Mar 19 11:49:00.786048 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Mar 19 11:49:00.786099 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Mar 19 11:49:00.786149 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 19 11:49:00.786158 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Mar 19 11:49:00.786164 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Mar 19 11:49:00.786170 kernel: ACPI: PCI: Interrupt link LNKB disabled Mar 19 11:49:00.786176 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 19 11:49:00.786182 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Mar 19 11:49:00.786188 kernel: iommu: Default domain type: Translated Mar 19 11:49:00.786196 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 19 11:49:00.786202 kernel: PCI: Using ACPI for IRQ routing Mar 19 11:49:00.786208 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 19 11:49:00.786214 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Mar 19 11:49:00.788512 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Mar 19 11:49:00.788589 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Mar 19 11:49:00.788647 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Mar 19 11:49:00.788701 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 19 11:49:00.788710 kernel: vgaarb: loaded Mar 19 11:49:00.788724 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Mar 19 11:49:00.788733 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Mar 19 11:49:00.788742 kernel: clocksource: Switched to clocksource tsc-early Mar 19 11:49:00.788750 kernel: VFS: Disk quotas dquot_6.6.0 Mar 19 11:49:00.788760 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 19 11:49:00.788767 kernel: pnp: PnP ACPI init Mar 19 11:49:00.788834 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Mar 19 11:49:00.788885 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Mar 19 11:49:00.788954 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Mar 19 11:49:00.789017 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Mar 19 11:49:00.789069 kernel: pnp 00:06: [dma 2] Mar 19 11:49:00.789125 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Mar 19 11:49:00.789173 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Mar 19 11:49:00.789513 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Mar 19 11:49:00.789528 kernel: pnp: PnP ACPI: found 8 devices Mar 19 11:49:00.789535 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 19 11:49:00.789541 kernel: NET: Registered PF_INET protocol family Mar 19 11:49:00.789547 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 19 11:49:00.789553 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 19 11:49:00.789560 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 19 11:49:00.789566 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 19 11:49:00.789572 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 19 11:49:00.789577 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 19 11:49:00.789585 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 19 11:49:00.789591 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 19 11:49:00.789597 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 19 11:49:00.789603 kernel: NET: Registered PF_XDP protocol family Mar 19 11:49:00.789670 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Mar 19 11:49:00.789729 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 19 11:49:00.789785 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 19 11:49:00.789863 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 19 11:49:00.789940 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 19 11:49:00.790021 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Mar 19 11:49:00.790126 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Mar 19 11:49:00.790211 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Mar 19 11:49:00.790364 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Mar 19 11:49:00.790425 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Mar 19 11:49:00.790480 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Mar 19 11:49:00.790533 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Mar 19 11:49:00.790587 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Mar 19 11:49:00.790640 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Mar 19 11:49:00.790693 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Mar 19 11:49:00.790756 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Mar 19 11:49:00.790809 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Mar 19 11:49:00.790862 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Mar 19 11:49:00.790915 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Mar 19 11:49:00.790966 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Mar 19 11:49:00.791035 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Mar 19 11:49:00.791363 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Mar 19 11:49:00.791426 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Mar 19 11:49:00.791482 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Mar 19 11:49:00.791536 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Mar 19 11:49:00.791612 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.791892 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.791955 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.792013 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.792087 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.792165 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.792292 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.792365 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.792447 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.792512 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.792567 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Mar 19 11:49:00.792624 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.792677 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.792729 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.792782 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.792835 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.792887 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.792938 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.792991 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.793046 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.793335 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.793390 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.793444 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.793496 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.793550 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.793601 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.793655 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.793713 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.793767 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.793820 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.793873 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.793924 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.793977 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.794029 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.794101 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.794159 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.794213 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.794339 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.794393 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.794445 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.794498 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.794550 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.795297 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.795365 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.795422 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.795481 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.795535 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.795587 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.795640 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.795691 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.795743 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Mar 19 11:49:00.795794 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.795845 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.795900 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.795951 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796003 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796056 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796108 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796161 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796212 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796284 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796338 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796395 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796447 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796499 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796551 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796603 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796654 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796706 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796757 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796809 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796861 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.796915 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.796967 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.797019 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.797069 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.797121 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.797174 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.798246 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.798310 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.798366 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.798423 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.798476 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Mar 19 11:49:00.798528 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Mar 19 11:49:00.798580 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Mar 19 11:49:00.798633 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Mar 19 11:49:00.798684 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Mar 19 11:49:00.798734 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Mar 19 11:49:00.798786 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 19 11:49:00.798844 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Mar 19 11:49:00.798897 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Mar 
19 11:49:00.798948 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Mar 19 11:49:00.798999 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Mar 19 11:49:00.799051 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Mar 19 11:49:00.799104 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Mar 19 11:49:00.799156 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Mar 19 11:49:00.799207 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Mar 19 11:49:00.799742 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Mar 19 11:49:00.799805 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Mar 19 11:49:00.799859 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Mar 19 11:49:00.799912 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Mar 19 11:49:00.799963 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Mar 19 11:49:00.800013 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Mar 19 11:49:00.800064 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Mar 19 11:49:00.800151 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Mar 19 11:49:00.800203 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Mar 19 11:49:00.800575 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Mar 19 11:49:00.800631 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 19 11:49:00.800690 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Mar 19 11:49:00.800743 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Mar 19 11:49:00.800795 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Mar 19 11:49:00.800847 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Mar 19 11:49:00.800899 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Mar 19 11:49:00.800950 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Mar 19 11:49:00.801004 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Mar 19 11:49:00.801056 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Mar 19 11:49:00.801108 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Mar 19 11:49:00.801163 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Mar 19 11:49:00.801265 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Mar 19 11:49:00.801333 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Mar 19 11:49:00.801387 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Mar 19 11:49:00.801438 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Mar 19 11:49:00.801492 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Mar 19 11:49:00.801547 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Mar 19 11:49:00.801599 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Mar 19 11:49:00.801650 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Mar 19 11:49:00.801702 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Mar 19 11:49:00.801753 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Mar 19 11:49:00.801804 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Mar 19 11:49:00.801855 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Mar 19 11:49:00.801906 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Mar 19 11:49:00.801958 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Mar 19 11:49:00.802011 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 19 11:49:00.802063 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Mar 19 11:49:00.802114 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Mar 19 11:49:00.802165 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 19 11:49:00.802216 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Mar 19 11:49:00.802282 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Mar 19 11:49:00.802334 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Mar 19 11:49:00.802385 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Mar 19 11:49:00.802436 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Mar 19 11:49:00.802486 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Mar 19 11:49:00.802541 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Mar 19 11:49:00.802593 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Mar 19 11:49:00.802644 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 19 11:49:00.802697 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Mar 19 11:49:00.802748 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Mar 19 11:49:00.802798 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Mar 19 11:49:00.802850 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 19 11:49:00.802902 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Mar 19 11:49:00.802954 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Mar 19 11:49:00.803008 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Mar 19 11:49:00.803058 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Mar 19 11:49:00.803111 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Mar 19 11:49:00.803162 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Mar 19 11:49:00.803212 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Mar 19 11:49:00.803512 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Mar 19 11:49:00.803568 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Mar 19 11:49:00.803620 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Mar 19 11:49:00.803673 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 19 11:49:00.803725 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Mar 19 11:49:00.803780 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Mar 19 11:49:00.803831 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 19 11:49:00.803883 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Mar 19 11:49:00.803934 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Mar 19 11:49:00.803985 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Mar 19 11:49:00.804036 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Mar 19 11:49:00.804087 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Mar 19 11:49:00.804138 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Mar 19 11:49:00.804188 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Mar 19 11:49:00.804255 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Mar 19 11:49:00.804313 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Mar 19 11:49:00.804366 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Mar 19 11:49:00.804418 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Mar 19 11:49:00.804469 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Mar 19 11:49:00.804521 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Mar 19 11:49:00.804579 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Mar 19 11:49:00.804632 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Mar 19 11:49:00.804684 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Mar 19 11:49:00.804736 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Mar 19 11:49:00.806311 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Mar 19 11:49:00.806373 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Mar 19 11:49:00.806429 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Mar 19 11:49:00.806483 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Mar 19 11:49:00.806535 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Mar 19 11:49:00.806588 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 19 11:49:00.806640 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Mar 19 11:49:00.806703 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Mar 19 11:49:00.806757 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Mar 19 11:49:00.806814 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Mar 19 11:49:00.806865 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Mar 19 11:49:00.806916 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Mar 19 11:49:00.806969 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Mar 19 11:49:00.807022 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Mar 19 11:49:00.807073 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Mar 19 11:49:00.807124 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Mar 19 11:49:00.807175 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Mar 19 11:49:00.807261 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 19 11:49:00.807321 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Mar 19 11:49:00.807371 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Mar 19 11:49:00.807416 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Mar 19 11:49:00.807461 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Mar 19 11:49:00.807517 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Mar 19 11:49:00.807570 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Mar 19 11:49:00.807617 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Mar 19 11:49:00.807663 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 19 11:49:00.807713 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Mar 19 11:49:00.807760 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Mar 19 11:49:00.807806 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Mar 19 11:49:00.807852 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Mar 19 11:49:00.807898 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Mar 19 11:49:00.807949 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Mar 19 11:49:00.807997 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Mar 19 11:49:00.808046 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Mar 19 11:49:00.808100 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Mar 19 11:49:00.808149 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Mar 19 11:49:00.808195 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Mar 19 11:49:00.808782 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Mar 19 11:49:00.808838 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Mar 19 11:49:00.808887 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Mar 19 11:49:00.808943 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Mar 19 11:49:00.808991 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Mar 19 11:49:00.809044 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Mar 19 11:49:00.809093 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 19 11:49:00.809143 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Mar 19 11:49:00.809191 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Mar 19 11:49:00.809277 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Mar 19 11:49:00.809325 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Mar 19 11:49:00.809378 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Mar 19 11:49:00.809434 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Mar 19 11:49:00.809490 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Mar 19 11:49:00.809541 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Mar 19 11:49:00.809590 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Mar 19 11:49:00.809642 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Mar 19 11:49:00.809689 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Mar 19 11:49:00.809736 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Mar 19 11:49:00.809789 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Mar 19 11:49:00.809838 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Mar 19 11:49:00.809885 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Mar 19 11:49:00.809938 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Mar 19 11:49:00.809987 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 19 11:49:00.810038 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Mar 19 11:49:00.810086 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 19 11:49:00.810137 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Mar 19 11:49:00.810185 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Mar 19 11:49:00.810258 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Mar 19 11:49:00.810318 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Mar 19 11:49:00.810374 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Mar 19 11:49:00.810423 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 19 11:49:00.810474 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Mar 19 11:49:00.810522 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Mar 19 11:49:00.810573 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 19 11:49:00.810625 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Mar 19 11:49:00.810673 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Mar 19 11:49:00.810720 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Mar 19 11:49:00.810772 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Mar 19 11:49:00.810820 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Mar 19 11:49:00.810868 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Mar 19 11:49:00.810924 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Mar 19 11:49:00.810972 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 19 11:49:00.811024 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Mar 19 11:49:00.811072 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 19 11:49:00.811124 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Mar 19 11:49:00.811172 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Mar 19 11:49:00.811265 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Mar 19 11:49:00.811321 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Mar 19 11:49:00.811373 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Mar 19 11:49:00.811422 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Mar 19 11:49:00.811476 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Mar 19 11:49:00.811525 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Mar 19 11:49:00.811574 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Mar 19 11:49:00.811626 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Mar 19 11:49:00.811674 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Mar 19 11:49:00.811720 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Mar 19 11:49:00.811772 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Mar 19 11:49:00.811820 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Mar 19 11:49:00.811873 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Mar 19 11:49:00.811925 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 19 11:49:00.811977 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Mar 19 11:49:00.812025 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Mar 19 11:49:00.812077 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Mar 19 11:49:00.812126 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Mar 19 11:49:00.812177 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Mar 19 11:49:00.812463 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Mar 19 11:49:00.812522 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Mar 19 11:49:00.812571 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 19 11:49:00.812631 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 19 11:49:00.812641 kernel: PCI: CLS 32 bytes, default 64 Mar 19 11:49:00.812648 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 19 11:49:00.812656 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Mar 19 
11:49:00.812664 kernel: clocksource: Switched to clocksource tsc Mar 19 11:49:00.812671 kernel: Initialise system trusted keyrings Mar 19 11:49:00.812679 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 19 11:49:00.812685 kernel: Key type asymmetric registered Mar 19 11:49:00.812692 kernel: Asymmetric key parser 'x509' registered Mar 19 11:49:00.812698 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 19 11:49:00.812704 kernel: io scheduler mq-deadline registered Mar 19 11:49:00.812711 kernel: io scheduler kyber registered Mar 19 11:49:00.812717 kernel: io scheduler bfq registered Mar 19 11:49:00.812774 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Mar 19 11:49:00.812829 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.812886 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Mar 19 11:49:00.812939 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.812993 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Mar 19 11:49:00.813046 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.813100 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Mar 19 11:49:00.813156 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.813211 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Mar 19 11:49:00.813369 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.813423 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Mar 19 11:49:00.813476 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.813534 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Mar 19 11:49:00.813586 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.813641 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Mar 19 11:49:00.813695 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.813748 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Mar 19 11:49:00.813801 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.813857 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Mar 19 11:49:00.813910 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.813965 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Mar 19 11:49:00.814017 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.814072 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Mar 19 11:49:00.814125 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.814180 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Mar 19 11:49:00.814577 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.814642 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Mar 19 11:49:00.814699 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.814753 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Mar 19 11:49:00.814812 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.814872 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Mar 19 11:49:00.814927 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.814982 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Mar 19 11:49:00.815036 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.815091 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Mar 19 11:49:00.815143 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.815200 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Mar 19 11:49:00.815528 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.815589 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Mar 19 11:49:00.815645 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.815700 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Mar 19 11:49:00.815753 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.815812 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Mar 19 11:49:00.815865 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.815919 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Mar 19 11:49:00.815971 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816025 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Mar 19 11:49:00.816078 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816134 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Mar 19 11:49:00.816186 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816306 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Mar 19 11:49:00.816359 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816412 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Mar 19 11:49:00.816464 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816521 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Mar 19 11:49:00.816573 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816626 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Mar 19 11:49:00.816678 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816731 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Mar 19 11:49:00.816787 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816840 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Mar 19 11:49:00.816892 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.816946 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Mar 19 11:49:00.816998 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 19 11:49:00.817009 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 19 11:49:00.817016 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 19 11:49:00.817023 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 19 11:49:00.817029 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Mar 19 11:49:00.817035 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 19 11:49:00.817042 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 19 11:49:00.817096 kernel: rtc_cmos 00:01: registered as rtc0 Mar 19 11:49:00.817144 kernel: rtc_cmos 00:01: setting system clock to 2025-03-19T11:49:00 UTC (1742384940) Mar 19 11:49:00.817191 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Mar 19 11:49:00.817202 kernel: intel_pstate: CPU model not supported Mar 19 11:49:00.817209 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 19 11:49:00.817215 kernel: NET: Registered PF_INET6 protocol family Mar 19 11:49:00.817478 kernel: Segment Routing with IPv6 Mar 19 11:49:00.817485 kernel: In-situ OAM (IOAM) with IPv6 Mar 19 11:49:00.817492 kernel: NET: Registered PF_PACKET protocol family Mar 19 11:49:00.817499 kernel: Key type dns_resolver registered Mar 19 11:49:00.817505 kernel: IPI shorthand broadcast: enabled Mar 19 11:49:00.817511 kernel: sched_clock: Marking stable (896004053, 228844813)->(1188850592, -64001726) Mar 19 11:49:00.817520 kernel: registered taskstats version 1 Mar 19 11:49:00.817526 kernel: Loading compiled-in X.509 certificates Mar 19 11:49:00.817533 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ea8d6696bd19c98b32173a761210456cdad6b56b' Mar 19 11:49:00.817539 kernel: Key type .fscrypt registered Mar 19 11:49:00.817546 kernel: Key type fscrypt-provisioning registered Mar 19 11:49:00.817551 
kernel: ima: No TPM chip found, activating TPM-bypass! Mar 19 11:49:00.817558 kernel: ima: Allocated hash algorithm: sha1 Mar 19 11:49:00.817564 kernel: ima: No architecture policies found Mar 19 11:49:00.817572 kernel: clk: Disabling unused clocks Mar 19 11:49:00.817578 kernel: Freeing unused kernel image (initmem) memory: 43480K Mar 19 11:49:00.817585 kernel: Write protecting the kernel read-only data: 38912k Mar 19 11:49:00.817591 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K Mar 19 11:49:00.817597 kernel: Run /init as init process Mar 19 11:49:00.817604 kernel: with arguments: Mar 19 11:49:00.817611 kernel: /init Mar 19 11:49:00.817617 kernel: with environment: Mar 19 11:49:00.817623 kernel: HOME=/ Mar 19 11:49:00.817629 kernel: TERM=linux Mar 19 11:49:00.817636 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 19 11:49:00.817643 systemd[1]: Successfully made /usr/ read-only. Mar 19 11:49:00.817652 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:49:00.817659 systemd[1]: Detected virtualization vmware. Mar 19 11:49:00.817665 systemd[1]: Detected architecture x86-64. Mar 19 11:49:00.817671 systemd[1]: Running in initrd. Mar 19 11:49:00.817678 systemd[1]: No hostname configured, using default hostname. Mar 19 11:49:00.817697 systemd[1]: Hostname set to . Mar 19 11:49:00.817704 systemd[1]: Initializing machine ID from random generator. Mar 19 11:49:00.817710 systemd[1]: Queued start job for default target initrd.target. Mar 19 11:49:00.817717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:49:00.817724 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:49:00.817731 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 19 11:49:00.817737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:49:00.817744 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 19 11:49:00.817770 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 19 11:49:00.817787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 19 11:49:00.817795 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 19 11:49:00.817801 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:49:00.817808 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:49:00.817815 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:49:00.817821 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:49:00.817830 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:49:00.817837 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:49:00.817844 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:49:00.817851 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Mar 19 11:49:00.817858 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 19 11:49:00.817865 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 19 11:49:00.817872 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:49:00.817878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:49:00.817886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:49:00.817893 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:49:00.817899 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 19 11:49:00.817906 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:49:00.817912 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 19 11:49:00.817919 systemd[1]: Starting systemd-fsck-usr.service... Mar 19 11:49:00.817925 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:49:00.817932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:49:00.817938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:49:00.817946 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 19 11:49:00.817953 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:49:00.817960 systemd[1]: Finished systemd-fsck-usr.service. Mar 19 11:49:00.817967 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:49:00.817992 systemd-journald[217]: Collecting audit messages is disabled. Mar 19 11:49:00.818010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:49:00.818017 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:49:00.818023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:49:00.818031 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:49:00.818038 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:49:00.818045 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 19 11:49:00.818052 kernel: Bridge firewalling registered Mar 19 11:49:00.818058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:49:00.818065 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:49:00.818072 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:49:00.818079 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 19 11:49:00.818085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:49:00.818094 systemd-journald[217]: Journal started Mar 19 11:49:00.818109 systemd-journald[217]: Runtime Journal (/run/log/journal/01c0159cd4f64050a913ee0d53cb396a) is 4.8M, max 38.6M, 33.8M free. 
Mar 19 11:49:00.758456 systemd-modules-load[218]: Inserted module 'overlay' Mar 19 11:49:00.791091 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 19 11:49:00.818732 dracut-cmdline[237]: dracut-dracut-053 Mar 19 11:49:00.818732 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:49:00.822321 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:49:00.827317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:49:00.832324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:49:00.833705 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:49:00.855997 systemd-resolved[286]: Positive Trust Anchors: Mar 19 11:49:00.856249 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:49:00.856416 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:49:00.858669 systemd-resolved[286]: Defaulting to hostname 'linux'. Mar 19 11:49:00.859278 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:49:00.859415 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:49:00.866243 kernel: SCSI subsystem initialized Mar 19 11:49:00.872230 kernel: Loading iSCSI transport class v2.0-870. Mar 19 11:49:00.878233 kernel: iscsi: registered transport (tcp) Mar 19 11:49:00.891406 kernel: iscsi: registered transport (qla4xxx) Mar 19 11:49:00.891447 kernel: QLogic iSCSI HBA Driver Mar 19 11:49:00.911362 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 19 11:49:00.915320 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 19 11:49:00.929436 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 19 11:49:00.929473 kernel: device-mapper: uevent: version 1.0.3 Mar 19 11:49:00.930577 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 19 11:49:00.962266 kernel: raid6: avx2x4 gen() 47050 MB/s Mar 19 11:49:00.978266 kernel: raid6: avx2x2 gen() 52794 MB/s Mar 19 11:49:00.995491 kernel: raid6: avx2x1 gen() 44408 MB/s Mar 19 11:49:00.995510 kernel: raid6: using algorithm avx2x2 gen() 52794 MB/s Mar 19 11:49:01.013480 kernel: raid6: .... 
xor() 31801 MB/s, rmw enabled Mar 19 11:49:01.013505 kernel: raid6: using avx2x2 recovery algorithm Mar 19 11:49:01.026232 kernel: xor: automatically using best checksumming function avx Mar 19 11:49:01.115239 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 19 11:49:01.120256 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:49:01.124324 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:49:01.132087 systemd-udevd[434]: Using default interface naming scheme 'v255'. Mar 19 11:49:01.134995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:49:01.149349 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 11:49:01.157789 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Mar 19 11:49:01.173351 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:49:01.177353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:49:01.246744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:49:01.253468 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:49:01.263446 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:49:01.264562 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:49:01.265050 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:49:01.265354 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:49:01.267424 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:49:01.276683 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:49:01.323241 kernel: VMware PVSCSI driver - version 1.0.7.0-k Mar 19 11:49:01.326243 kernel: vmw_pvscsi: using 64bit dma Mar 19 11:49:01.331322 kernel: vmw_pvscsi: max_id: 16 Mar 19 11:49:01.331347 kernel: vmw_pvscsi: setting ring_pages to 8 Mar 19 11:49:01.336230 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Mar 19 11:49:01.338283 kernel: vmw_pvscsi: enabling reqCallThreshold Mar 19 11:49:01.338303 kernel: vmw_pvscsi: driver-based request coalescing enabled Mar 19 11:49:01.338312 kernel: vmw_pvscsi: using MSI-X Mar 19 11:49:01.341642 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Mar 19 11:49:01.341746 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Mar 19 11:49:01.344413 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Mar 19 11:49:01.349326 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Mar 19 11:49:01.356752 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Mar 19 11:49:01.356839 kernel: cryptd: max_cpu_qlen set to 1000 Mar 19 11:49:01.364234 kernel: libata version 3.00 loaded. Mar 19 11:49:01.366231 kernel: ata_piix 0000:00:07.1: version 2.13 Mar 19 11:49:01.370756 kernel: scsi host1: ata_piix Mar 19 11:49:01.370838 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Mar 19 11:49:01.370913 kernel: scsi host2: ata_piix Mar 19 11:49:01.370989 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Mar 19 11:49:01.371003 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Mar 19 11:49:01.370153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 19 11:49:01.373808 kernel: AVX2 version of gcm_enc/dec engaged. Mar 19 11:49:01.373827 kernel: AES CTR mode by8 optimization enabled Mar 19 11:49:01.370248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:49:01.370486 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:49:01.373694 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:49:01.373797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:49:01.373969 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:49:01.380469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:49:01.393984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:49:01.399312 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:49:01.409882 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:49:01.537240 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Mar 19 11:49:01.541231 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Mar 19 11:49:01.554232 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Mar 19 11:49:01.558409 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 19 11:49:01.558493 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Mar 19 11:49:01.558563 kernel: sd 0:0:0:0: [sda] Cache data unavailable Mar 19 11:49:01.558624 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Mar 19 11:49:01.558685 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:49:01.558699 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 19 11:49:01.558760 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Mar 19 11:49:01.580333 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 19 11:49:01.580345 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 19 11:49:01.614444 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (480) Mar 19 11:49:01.619230 kernel: BTRFS: device fsid 8d57424d-5abc-4888-810f-658d040a58e4 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (491) Mar 19 11:49:01.620391 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Mar 19 11:49:01.626783 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Mar 19 11:49:01.631121 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Mar 19 11:49:01.631268 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Mar 19 11:49:01.636804 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Mar 19 11:49:01.641314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:49:01.671236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:49:02.681236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 19 11:49:02.681286 disk-uuid[593]: The operation has completed successfully. Mar 19 11:49:02.720445 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:49:02.720682 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Mar 19 11:49:02.744374 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:49:02.746079 sh[609]: Success Mar 19 11:49:02.754251 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 19 11:49:02.797929 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:49:02.803467 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:49:02.805394 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:49:02.821235 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57424d-5abc-4888-810f-658d040a58e4 Mar 19 11:49:02.821271 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:49:02.821280 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:49:02.821957 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:49:02.823420 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:49:02.830234 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 19 11:49:02.832521 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:49:02.837318 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Mar 19 11:49:02.838511 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:49:02.854337 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:49:02.854372 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:49:02.854381 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:49:02.857346 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 19 11:49:02.863230 kernel: BTRFS info (device sda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:49:02.863319 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:49:02.867557 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:49:02.871566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:49:02.914654 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Mar 19 11:49:02.921327 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 19 11:49:02.957422 ignition[670]: Ignition 2.20.0 Mar 19 11:49:02.957428 ignition[670]: Stage: fetch-offline Mar 19 11:49:02.957447 ignition[670]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:49:02.957452 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 19 11:49:02.957499 ignition[670]: parsed url from cmdline: "" Mar 19 11:49:02.957501 ignition[670]: no config URL provided Mar 19 11:49:02.957504 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:49:02.957508 ignition[670]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:49:02.957867 ignition[670]: config successfully fetched Mar 19 11:49:02.957883 ignition[670]: parsing config with SHA512: c89e21805781ec27a9bfe9f811188a2f4c7d146e01d73e5ddf9119ce7790013150d80cdf923f525f2007a3fc11199d652c8b5a727d3675ca22472d1fa5fd10ce Mar 19 11:49:02.960729 unknown[670]: fetched base config from "system" Mar 19 11:49:02.960735 unknown[670]: fetched user config from "vmware" Mar 19 11:49:02.961113 ignition[670]: fetch-offline: fetch-offline passed Mar 19 11:49:02.961151 ignition[670]: Ignition finished successfully Mar 19 11:49:02.961943 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:49:02.978490 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:49:02.982314 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:49:02.997405 systemd-networkd[803]: lo: Link UP Mar 19 11:49:02.997411 systemd-networkd[803]: lo: Gained carrier Mar 19 11:49:02.998162 systemd-networkd[803]: Enumeration completed Mar 19 11:49:02.998357 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:49:02.998416 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Mar 19 11:49:02.998501 systemd[1]: Reached target network.target - Network. Mar 19 11:49:02.998585 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 19 11:49:03.002021 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Mar 19 11:49:03.002138 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Mar 19 11:49:03.001817 systemd-networkd[803]: ens192: Link UP Mar 19 11:49:03.001819 systemd-networkd[803]: ens192: Gained carrier Mar 19 11:49:03.003337 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 19 11:49:03.012455 ignition[806]: Ignition 2.20.0 Mar 19 11:49:03.012463 ignition[806]: Stage: kargs Mar 19 11:49:03.012624 ignition[806]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:49:03.012631 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 19 11:49:03.013293 ignition[806]: kargs: kargs passed Mar 19 11:49:03.013320 ignition[806]: Ignition finished successfully Mar 19 11:49:03.014204 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:49:03.019361 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 19 11:49:03.026636 ignition[813]: Ignition 2.20.0 Mar 19 11:49:03.026646 ignition[813]: Stage: disks Mar 19 11:49:03.026749 ignition[813]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:49:03.026755 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 19 11:49:03.027326 ignition[813]: disks: disks passed Mar 19 11:49:03.027358 ignition[813]: Ignition finished successfully Mar 19 11:49:03.027989 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:49:03.028473 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:49:03.028590 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:49:03.028701 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:49:03.028796 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:49:03.028891 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:49:03.030099 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:49:03.041097 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 19 11:49:03.041994 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:49:03.822276 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:49:03.878085 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:49:03.878338 kernel: EXT4-fs (sda9): mounted filesystem 303a73dd-e104-408b-9302-bf91b04ba1ca r/w with ordered data mode. Quota mode: none. Mar 19 11:49:03.878482 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:49:03.885298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:49:03.886739 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:49:03.887056 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 19 11:49:03.887094 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:49:03.887111 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:49:03.890540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:49:03.891469 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 19 11:49:03.894239 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (830) Mar 19 11:49:03.898256 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:49:03.898276 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:49:03.898290 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:49:03.904260 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 19 11:49:03.904069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:49:03.928958 initrd-setup-root[854]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:49:03.932312 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:49:03.935400 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:49:03.937873 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:49:04.006228 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Mar 19 11:49:04.010407 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:49:04.012744 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:49:04.016259 kernel: BTRFS info (device sda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:49:04.027312 ignition[942]: INFO : Ignition 2.20.0 Mar 19 11:49:04.027312 ignition[942]: INFO : Stage: mount Mar 19 11:49:04.027312 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:49:04.027312 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 19 11:49:04.028554 ignition[942]: INFO : mount: mount passed Mar 19 11:49:04.028695 ignition[942]: INFO : Ignition finished successfully Mar 19 11:49:04.029813 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:49:04.034329 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:49:04.036275 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:49:04.666350 systemd-networkd[803]: ens192: Gained IPv6LL Mar 19 11:49:04.819418 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:49:04.824410 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:49:04.832251 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (954) Mar 19 11:49:04.835755 kernel: BTRFS info (device sda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:49:04.835798 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:49:04.835807 kernel: BTRFS info (device sda6): using free space tree Mar 19 11:49:04.840236 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 19 11:49:04.840640 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:49:04.854786 ignition[971]: INFO : Ignition 2.20.0 Mar 19 11:49:04.854786 ignition[971]: INFO : Stage: files Mar 19 11:49:04.855196 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:49:04.855196 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 19 11:49:04.855556 ignition[971]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:49:04.856837 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:49:04.856837 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:49:04.859073 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:49:04.859324 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:49:04.859454 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:49:04.859373 unknown[971]: wrote ssh authorized keys file for user: core Mar 19 11:49:04.863171 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 19 11:49:04.863406 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 19 11:49:04.893229 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:49:04.994913 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 19 11:49:04.994913 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:49:04.995367 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 19 11:49:05.474236 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 19 11:49:05.511273 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:49:05.511524 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:49:05.511524 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:49:05.511524 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:49:05.511524 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:49:05.511524 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:49:05.512258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:49:05.512258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:49:05.512258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:49:05.512258 
ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:49:05.512258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:49:05.512258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 19 11:49:05.512258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 19 11:49:05.512258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 19 11:49:05.512258 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 19 11:49:05.800350 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 19 11:49:05.923126 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 19 11:49:05.924140 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Mar 19 11:49:05.924140 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Mar 19 11:49:05.924140 ignition[971]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Mar 19 11:49:05.951648 ignition[971]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:49:05.954170 ignition[971]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:49:05.954359 ignition[971]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Mar 19 11:49:05.954359 ignition[971]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Mar 19 
11:49:05.954359 ignition[971]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:49:05.954751 ignition[971]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:49:05.954751 ignition[971]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:49:05.954751 ignition[971]: INFO : files: files passed Mar 19 11:49:05.954751 ignition[971]: INFO : Ignition finished successfully Mar 19 11:49:05.955100 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:49:05.959378 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:49:05.961079 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:49:05.961745 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:49:05.961928 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:49:05.967201 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:49:05.967201 initrd-setup-root-after-ignition[1001]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:49:05.968411 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:49:05.969062 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:49:05.969485 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:49:05.972485 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:49:05.990696 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:49:05.990759 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:49:05.991050 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:49:05.991175 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:49:05.991415 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:49:05.991892 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:49:06.001591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:49:06.005319 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:49:06.011995 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:49:06.012186 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:49:06.012438 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:49:06.012648 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:49:06.012734 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:49:06.013138 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:49:06.013529 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:49:06.013699 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:49:06.013893 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Mar 19 11:49:06.014088 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:49:06.014307 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:49:06.014521 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:49:06.014708 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:49:06.014924 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:49:06.015105 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:49:06.015274 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:49:06.015352 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:49:06.015713 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:49:06.015877 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:49:06.016055 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:49:06.016100 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:49:06.016277 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:49:06.016341 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:49:06.016633 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:49:06.016702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:49:06.016959 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:49:06.017105 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:49:06.019246 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:49:06.019474 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:49:06.019686 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:49:06.019878 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:49:06.019935 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:49:06.020138 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:49:06.020182 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:49:06.020487 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:49:06.020604 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:49:06.020853 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:49:06.020946 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:49:06.025423 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:49:06.027381 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:49:06.027495 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:49:06.027609 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:49:06.027888 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:49:06.027951 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:49:06.031339 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:49:06.031394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 19 11:49:06.036122 ignition[1026]: INFO : Ignition 2.20.0 Mar 19 11:49:06.036541 ignition[1026]: INFO : Stage: umount Mar 19 11:49:06.036541 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:49:06.036845 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 19 11:49:06.037666 ignition[1026]: INFO : umount: umount passed Mar 19 11:49:06.037828 ignition[1026]: INFO : Ignition finished successfully Mar 19 11:49:06.038423 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:49:06.038515 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:49:06.038947 systemd[1]: Stopped target network.target - Network. Mar 19 11:49:06.039043 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:49:06.039075 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:49:06.039202 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:49:06.039233 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:49:06.039365 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:49:06.039388 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:49:06.039534 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:49:06.039578 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:49:06.040381 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:49:06.040650 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:49:06.042081 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:49:06.042263 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:49:06.044007 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:49:06.044443 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:49:06.044593 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:49:06.045683 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:49:06.051843 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:49:06.051934 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:49:06.052903 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:49:06.053030 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:49:06.053048 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:49:06.060369 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:49:06.060535 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:49:06.060579 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:49:06.061495 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Mar 19 11:49:06.061536 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Mar 19 11:49:06.061915 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:49:06.061941 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:49:06.062201 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Mar 19 11:49:06.062233 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 19 11:49:06.062347 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:49:06.063203 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:49:06.068582 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 19 11:49:06.068652 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 19 11:49:06.072586 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 19 11:49:06.072660 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:49:06.072987 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 19 11:49:06.073015 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 19 11:49:06.073397 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 19 11:49:06.073414 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:49:06.073564 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 19 11:49:06.073591 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:49:06.073858 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 19 11:49:06.073883 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 19 11:49:06.074171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:49:06.074195 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:49:06.074937 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 19 11:49:06.075034 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 19 11:49:06.075060 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:49:06.075227 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 19 11:49:06.075255 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:49:06.075363 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:49:06.075385 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:49:06.075489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:49:06.075510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:49:06.079336 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:49:06.079395 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 19 11:49:06.079422 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:49:06.080500 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 19 11:49:06.080562 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 19 11:49:06.334841 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:49:06.334909 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:49:06.335274 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 19 11:49:06.335396 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:49:06.335426 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Mar 19 11:49:06.338565 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 19 11:49:06.356429 systemd[1]: Switching root. Mar 19 11:49:06.392097 systemd-journald[217]: Journal stopped Mar 19 11:49:08.137797 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 19 11:49:08.137821 kernel: SELinux: policy capability network_peer_controls=1 Mar 19 11:49:08.137829 kernel: SELinux: policy capability open_perms=1 Mar 19 11:49:08.137835 kernel: SELinux: policy capability extended_socket_class=1 Mar 19 11:49:08.137840 kernel: SELinux: policy capability always_check_network=0 Mar 19 11:49:08.137846 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 19 11:49:08.137853 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 19 11:49:08.137859 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 19 11:49:08.137865 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 19 11:49:08.137871 systemd[1]: Successfully loaded SELinux policy in 31.664ms. Mar 19 11:49:08.137878 kernel: audit: type=1403 audit(1742384947.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 19 11:49:08.137884 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.268ms. Mar 19 11:49:08.137891 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:49:08.137899 systemd[1]: Detected virtualization vmware. Mar 19 11:49:08.137906 systemd[1]: Detected architecture x86-64. Mar 19 11:49:08.137913 systemd[1]: Detected first boot. Mar 19 11:49:08.137919 systemd[1]: Initializing machine ID from random generator. Mar 19 11:49:08.137927 zram_generator::config[1071]: No configuration found. Mar 19 11:49:08.138014 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Mar 19 11:49:08.138024 kernel: Guest personality initialized and is active Mar 19 11:49:08.138031 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 19 11:49:08.138037 kernel: Initialized host personality Mar 19 11:49:08.138042 kernel: NET: Registered PF_VSOCK protocol family Mar 19 11:49:08.138049 systemd[1]: Populated /etc with preset unit settings. Mar 19 11:49:08.138058 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 19 11:49:08.138065 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Mar 19 11:49:08.138072 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 19 11:49:08.138079 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 19 11:49:08.138085 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 19 11:49:08.138091 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 19 11:49:08.138100 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 19 11:49:08.138107 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 19 11:49:08.138115 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 19 11:49:08.138122 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Mar 19 11:49:08.138129 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 19 11:49:08.138135 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 19 11:49:08.138142 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 19 11:49:08.138149 systemd[1]: Created slice user.slice - User and Session Slice. Mar 19 11:49:08.138157 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:49:08.138164 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:49:08.138173 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 19 11:49:08.138180 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 19 11:49:08.138186 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 19 11:49:08.138194 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:49:08.138200 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 19 11:49:08.138207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:49:08.138216 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 19 11:49:08.139244 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 19 11:49:08.139255 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 19 11:49:08.139264 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 19 11:49:08.139271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:49:08.139277 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:49:08.139284 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:49:08.139291 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:49:08.139304 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 19 11:49:08.139312 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 19 11:49:08.139319 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 19 11:49:08.139326 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:49:08.139333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:49:08.139342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:49:08.139349 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 19 11:49:08.139356 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 19 11:49:08.139363 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 19 11:49:08.139370 systemd[1]: Mounting media.mount - External Media Directory... Mar 19 11:49:08.139377 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:49:08.139383 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 19 11:49:08.139390 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 19 11:49:08.139398 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 19 11:49:08.139406 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 19 11:49:08.139413 systemd[1]: Reached target machines.target - Containers. Mar 19 11:49:08.139421 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 19 11:49:08.139428 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Mar 19 11:49:08.139435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:49:08.139442 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 19 11:49:08.139449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:49:08.139457 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:49:08.139464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:49:08.139471 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 19 11:49:08.139478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:49:08.139485 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 19 11:49:08.139492 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 19 11:49:08.139499 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 19 11:49:08.139505 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 19 11:49:08.139512 systemd[1]: Stopped systemd-fsck-usr.service. Mar 19 11:49:08.139521 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:49:08.139528 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:49:08.139535 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:49:08.139542 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 19 11:49:08.139549 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 19 11:49:08.139556 kernel: fuse: init (API version 7.39) Mar 19 11:49:08.139563 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 19 11:49:08.139570 kernel: loop: module loaded Mar 19 11:49:08.139578 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:49:08.139585 systemd[1]: verity-setup.service: Deactivated successfully. Mar 19 11:49:08.139592 systemd[1]: Stopped verity-setup.service. Mar 19 11:49:08.139599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:49:08.139606 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 19 11:49:08.139613 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 19 11:49:08.139620 systemd[1]: Mounted media.mount - External Media Directory. Mar 19 11:49:08.139627 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 19 11:49:08.139634 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Mar 19 11:49:08.139642 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 19 11:49:08.139662 systemd-journald[1157]: Collecting audit messages is disabled. Mar 19 11:49:08.139680 systemd-journald[1157]: Journal started Mar 19 11:49:08.139696 systemd-journald[1157]: Runtime Journal (/run/log/journal/bc41a0a6a9a943a9ba4440f58aba612a) is 4.8M, max 38.6M, 33.8M free. Mar 19 11:49:08.143075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:49:08.143099 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 19 11:49:07.933559 systemd[1]: Queued start job for default target multi-user.target. Mar 19 11:49:07.945447 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 19 11:49:07.945686 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 19 11:49:08.143693 jq[1141]: true Mar 19 11:49:08.159778 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 19 11:49:08.159821 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:49:08.146208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:49:08.146335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:49:08.146560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:49:08.146653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:49:08.147404 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 19 11:49:08.147503 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 19 11:49:08.147717 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:49:08.147804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:49:08.148209 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:49:08.148624 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 19 11:49:08.149411 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 19 11:49:08.158834 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 19 11:49:08.159687 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 19 11:49:08.161393 jq[1177]: true Mar 19 11:49:08.174816 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 19 11:49:08.177161 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 19 11:49:08.177290 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 19 11:49:08.177316 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:49:08.178122 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 19 11:49:08.179300 kernel: ACPI: bus type drm_connector registered Mar 19 11:49:08.181478 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 19 11:49:08.183437 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 19 11:49:08.183595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:49:08.185364 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Mar 19 11:49:08.189282 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 19 11:49:08.189419 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:49:08.190184 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 19 11:49:08.190319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:49:08.192306 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:49:08.193545 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 19 11:49:08.200593 systemd-journald[1157]: Time spent on flushing to /var/log/journal/bc41a0a6a9a943a9ba4440f58aba612a is 38.148ms for 1848 entries. Mar 19 11:49:08.200593 systemd-journald[1157]: System Journal (/var/log/journal/bc41a0a6a9a943a9ba4440f58aba612a) is 8M, max 584.8M, 576.8M free. Mar 19 11:49:08.268439 systemd-journald[1157]: Received client request to flush runtime journal. Mar 19 11:49:08.268475 kernel: loop0: detected capacity change from 0 to 147912 Mar 19 11:49:08.196311 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:49:08.199266 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 19 11:49:08.199544 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:49:08.199671 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:49:08.200418 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 19 11:49:08.203048 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 19 11:49:08.203725 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 19 11:49:08.207514 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 19 11:49:08.208661 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 19 11:49:08.216408 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 19 11:49:08.251978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:49:08.271962 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 19 11:49:08.295742 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 19 11:49:08.317768 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:49:08.321376 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 19 11:49:08.323747 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Mar 19 11:49:08.323900 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Mar 19 11:49:08.332141 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:49:08.341236 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 19 11:49:08.345167 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 19 11:49:08.346604 udevadm[1237]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Mar 19 11:49:08.358694 ignition[1215]: Ignition 2.20.0 Mar 19 11:49:08.358931 ignition[1215]: deleting config from guestinfo properties Mar 19 11:49:08.362679 ignition[1215]: Successfully deleted config Mar 19 11:49:08.365460 kernel: loop1: detected capacity change from 0 to 2960 Mar 19 11:49:08.367316 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Mar 19 11:49:08.375491 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 19 11:49:08.380413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:49:08.389312 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Mar 19 11:49:08.389323 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Mar 19 11:49:08.392017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:49:08.416234 kernel: loop2: detected capacity change from 0 to 210664 Mar 19 11:49:08.665256 kernel: loop3: detected capacity change from 0 to 138176 Mar 19 11:49:08.718261 kernel: loop4: detected capacity change from 0 to 147912 Mar 19 11:49:08.743286 kernel: loop5: detected capacity change from 0 to 2960 Mar 19 11:49:08.757252 kernel: loop6: detected capacity change from 0 to 210664 Mar 19 11:49:08.789345 kernel: loop7: detected capacity change from 0 to 138176 Mar 19 11:49:08.877152 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Mar 19 11:49:08.877448 (sd-merge)[1252]: Merged extensions into '/usr'. Mar 19 11:49:08.888227 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)... Mar 19 11:49:08.888325 systemd[1]: Reloading... Mar 19 11:49:08.917236 zram_generator::config[1276]: No configuration found. Mar 19 11:49:08.999212 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 19 11:49:09.019302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:49:09.068139 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 19 11:49:09.068393 systemd[1]: Reloading finished in 179 ms. Mar 19 11:49:09.080962 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 19 11:49:09.081340 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 19 11:49:09.087136 systemd[1]: Starting ensure-sysext.service... Mar 19 11:49:09.089313 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:49:09.092166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:49:09.110662 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Mar 19 11:49:09.110671 systemd[1]: Reloading... Mar 19 11:49:09.114209 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Mar 19 11:49:09.121412 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 19 11:49:09.121576 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Mar 19 11:49:09.122050 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 19 11:49:09.122209 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Mar 19 11:49:09.122262 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Mar 19 11:49:09.139754 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:49:09.139761 systemd-tmpfiles[1337]: Skipping /boot Mar 19 11:49:09.151236 zram_generator::config[1368]: No configuration found. Mar 19 11:49:09.151682 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:49:09.151688 systemd-tmpfiles[1337]: Skipping /boot Mar 19 11:49:09.215272 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 19 11:49:09.233893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:49:09.272355 ldconfig[1208]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 19 11:49:09.283036 systemd[1]: Reloading finished in 172 ms. Mar 19 11:49:09.292541 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:49:09.292919 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 19 11:49:09.293202 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:49:09.320211 systemd[1]: Finished ensure-sysext.service. Mar 19 11:49:09.320904 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 19 11:49:09.321092 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:49:09.327441 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:49:09.329291 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 19 11:49:09.333597 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:49:09.336325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:49:09.338795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:49:09.340342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:49:09.340543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:49:09.340571 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:49:09.343322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 19 11:49:09.346329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:49:09.350123 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:49:09.354358 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 19 11:49:09.356322 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Mar 19 11:49:09.357796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:49:09.361369 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 19 11:49:09.366653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:49:09.367050 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:49:09.370711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:49:09.371318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:49:09.372438 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:49:09.389429 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 19 11:49:09.390792 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:49:09.390902 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:49:09.406263 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:49:09.406389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:49:09.406648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:49:09.410229 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1423) Mar 19 11:49:09.430062 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:49:09.431408 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:49:09.438211 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:49:09.454241 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 19 11:49:09.461477 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 19 11:49:09.468696 augenrules[1500]: No rules Mar 19 11:49:09.470113 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:49:09.470337 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:49:09.481422 kernel: ACPI: button: Power Button [PWRF] Mar 19 11:49:09.518642 systemd-networkd[1453]: lo: Link UP Mar 19 11:49:09.518648 systemd-networkd[1453]: lo: Gained carrier Mar 19 11:49:09.519658 systemd-networkd[1453]: Enumeration completed Mar 19 11:49:09.519713 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:49:09.524337 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:49:09.524908 systemd-networkd[1453]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Mar 19 11:49:09.526358 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Mar 19 11:49:09.526486 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Mar 19 11:49:09.529409 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 19 11:49:09.533717 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 19 11:49:09.533869 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 19 11:49:09.535035 systemd-networkd[1453]: ens192: Link UP Mar 19 11:49:09.535128 systemd-networkd[1453]: ens192: Gained carrier Mar 19 11:49:09.540294 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Mar 19 11:49:09.542774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Mar 19 11:49:09.549424 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 19 11:49:09.549785 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:49:09.550289 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:49:09.550983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:49:09.555013 systemd-resolved[1457]: Positive Trust Anchors: Mar 19 11:49:09.555483 systemd-resolved[1457]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:49:09.555542 systemd-resolved[1457]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:49:09.559455 systemd-resolved[1457]: Defaulting to hostname 'linux'. Mar 19 11:49:09.560705 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:49:09.560929 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:49:09.561533 systemd[1]: Reached target network.target - Network. Mar 19 11:49:09.561740 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:49:09.587260 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Mar 19 11:49:09.608262 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Mar 19 11:50:26.315629 systemd-timesyncd[1459]: Contacted time server 23.111.186.186:123 (0.flatcar.pool.ntp.org). Mar 19 11:50:26.315662 systemd-timesyncd[1459]: Initial clock synchronization to Wed 2025-03-19 11:50:26.315575 UTC. Mar 19 11:50:26.315689 systemd-resolved[1457]: Clock change detected. Flushing caches. Mar 19 11:50:26.322975 (udev-worker)[1425]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Mar 19 11:50:26.332877 kernel: mousedev: PS/2 mouse device common for all mice Mar 19 11:50:26.337015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:50:26.346979 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:50:26.351980 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 19 11:50:26.360087 lvm[1522]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:50:26.380821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 19 11:50:26.386463 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:50:26.386689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:50:26.386819 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:50:26.386987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:50:26.387120 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:50:26.387320 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:50:26.387479 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:50:26.387600 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:50:26.387718 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:50:26.387738 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:50:26.387971 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:50:26.390789 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:50:26.391795 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:50:26.393330 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:50:26.393564 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:50:26.393709 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:50:26.395903 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 19 11:50:26.396235 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:50:26.397143 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:50:26.397578 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:50:26.397763 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:50:26.397897 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:50:26.398049 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:50:26.398065 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:50:26.400814 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:50:26.402066 lvm[1529]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:50:26.403857 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:50:26.405545 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:50:26.411855 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 19 11:50:26.412099 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:50:26.414022 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:50:26.416829 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 19 11:50:26.418641 jq[1532]: false Mar 19 11:50:26.418472 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:50:26.424892 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:50:26.430685 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:50:26.431418 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:50:26.432188 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:50:26.434356 extend-filesystems[1533]: Found loop4 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found loop5 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found loop6 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found loop7 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found sda Mar 19 11:50:26.443786 extend-filesystems[1533]: Found sda1 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found sda2 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found sda3 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found usr Mar 19 11:50:26.443786 extend-filesystems[1533]: Found sda4 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found sda6 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found sda7 Mar 19 11:50:26.443786 extend-filesystems[1533]: Found sda9 Mar 19 11:50:26.443786 extend-filesystems[1533]: Checking size of /dev/sda9 Mar 19 11:50:26.437801 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:50:26.439184 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:50:26.445842 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Mar 19 11:50:26.447128 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:50:26.448516 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:50:26.449790 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:50:26.451184 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:50:26.451467 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 19 11:50:26.457620 dbus-daemon[1531]: [system] SELinux support is enabled Mar 19 11:50:26.457701 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 19 11:50:26.460849 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:50:26.460868 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:50:26.461037 jq[1549]: true Mar 19 11:50:26.461184 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:50:26.461197 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 11:50:26.465406 extend-filesystems[1533]: Old size kept for /dev/sda9 Mar 19 11:50:26.465406 extend-filesystems[1533]: Found sr0 Mar 19 11:50:26.466267 systemd[1]: extend-filesystems.service: Deactivated successfully. 
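extend-filesystems above checks the size of /dev/sda9 and decides "Old size kept", meaning the filesystem already fills its partition. The sketch below is a hedged illustration of the comparison that decision implies, not the service's actual code: it assumes /dev/sda9 is the ROOT partition mounted at /, needs root to open the device, and uses the Linux BLKGETSIZE64 ioctl for the device size.

# Compare block device size with the size of the filesystem mounted from it,
# as implied by "Checking size of /dev/sda9" / "Old size kept" above.
import fcntl, os, struct

BLKGETSIZE64 = 0x80081272  # Linux ioctl: size of a block device in bytes

def block_device_bytes(dev):
    fd = os.open(dev, os.O_RDONLY)  # needs root
    try:
        buf = fcntl.ioctl(fd, BLKGETSIZE64, b"\0" * 8)
        return struct.unpack("Q", buf)[0]
    finally:
        os.close(fd)

def filesystem_bytes(mount_point):
    st = os.statvfs(mount_point)
    return st.f_frsize * st.f_blocks

dev_bytes = block_device_bytes("/dev/sda9")
fs_bytes = filesystem_bytes("/")          # assumption: sda9 is mounted at /
SLACK = 4 * 1024 * 1024                   # illustrative tolerance, not the tool's rule
print("grow needed" if dev_bytes > fs_bytes + SLACK else "old size kept")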
Mar 19 11:50:26.469974 update_engine[1548]: I20250319 11:50:26.469485 1548 main.cc:92] Flatcar Update Engine starting Mar 19 11:50:26.467792 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:50:26.468110 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:50:26.468212 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:50:26.475567 jq[1563]: true Mar 19 11:50:26.479915 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Mar 19 11:50:26.480554 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:50:26.480706 update_engine[1548]: I20250319 11:50:26.480576 1548 update_check_scheduler.cc:74] Next update check in 10m30s Mar 19 11:50:26.483940 (ntainerd)[1569]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:50:26.488833 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Mar 19 11:50:26.490302 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:50:26.492647 tar[1555]: linux-amd64/helm Mar 19 11:50:26.514866 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Mar 19 11:50:26.525396 unknown[1573]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Mar 19 11:50:26.529338 unknown[1573]: Core dump limit set to -1 Mar 19 11:50:26.539779 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1422) Mar 19 11:50:26.543153 systemd-logind[1539]: Watching system buttons on /dev/input/event1 (Power Button) Mar 19 11:50:26.543168 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 19 11:50:26.547513 systemd-logind[1539]: New seat seat0. Mar 19 11:50:26.548504 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:50:26.609873 bash[1593]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:50:26.611677 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:50:26.612261 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 19 11:50:26.656845 locksmithd[1577]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:50:26.728133 sshd_keygen[1546]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:50:26.767529 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:50:26.775970 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:50:26.786322 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:50:26.786460 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:50:26.791959 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:50:26.801279 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:50:26.810466 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:50:26.811997 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 19 11:50:26.812190 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 19 11:50:26.812963 containerd[1569]: time="2025-03-19T11:50:26.812925964Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:50:26.831425 containerd[1569]: time="2025-03-19T11:50:26.831295210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:50:26.832459 containerd[1569]: time="2025-03-19T11:50:26.832430517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:50:26.832637 containerd[1569]: time="2025-03-19T11:50:26.832514589Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:50:26.832637 containerd[1569]: time="2025-03-19T11:50:26.832526814Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:50:26.832685 containerd[1569]: time="2025-03-19T11:50:26.832676386Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:50:26.832718 containerd[1569]: time="2025-03-19T11:50:26.832711987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:50:26.832820 containerd[1569]: time="2025-03-19T11:50:26.832807068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:50:26.832868 containerd[1569]: time="2025-03-19T11:50:26.832860159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:50:26.833051 containerd[1569]: time="2025-03-19T11:50:26.833042131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:50:26.833087 containerd[1569]: time="2025-03-19T11:50:26.833080209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:50:26.833122 containerd[1569]: time="2025-03-19T11:50:26.833114725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:50:26.833298 containerd[1569]: time="2025-03-19T11:50:26.833156671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:50:26.833298 containerd[1569]: time="2025-03-19T11:50:26.833200707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:50:26.833412 containerd[1569]: time="2025-03-19T11:50:26.833384293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:50:26.833537 containerd[1569]: time="2025-03-19T11:50:26.833526874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:50:26.833569 containerd[1569]: time="2025-03-19T11:50:26.833563470Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:50:26.833681 containerd[1569]: time="2025-03-19T11:50:26.833653152Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 11:50:26.833736 containerd[1569]: time="2025-03-19T11:50:26.833728889Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:50:26.835449 containerd[1569]: time="2025-03-19T11:50:26.835438700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:50:26.835768 containerd[1569]: time="2025-03-19T11:50:26.835515790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:50:26.835768 containerd[1569]: time="2025-03-19T11:50:26.835529872Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:50:26.835768 containerd[1569]: time="2025-03-19T11:50:26.835539140Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:50:26.835768 containerd[1569]: time="2025-03-19T11:50:26.835548122Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:50:26.835768 containerd[1569]: time="2025-03-19T11:50:26.835616484Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:50:26.835768 containerd[1569]: time="2025-03-19T11:50:26.835734021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:50:26.835975 containerd[1569]: time="2025-03-19T11:50:26.835965507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 19 11:50:26.836016 containerd[1569]: time="2025-03-19T11:50:26.836009097Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:50:26.836052 containerd[1569]: time="2025-03-19T11:50:26.836045352Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836099547Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836112430Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836119604Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836126887Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836135750Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836142944Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836149516Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836155354Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836169033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836177386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836213 containerd[1569]: time="2025-03-19T11:50:26.836184986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836398 containerd[1569]: time="2025-03-19T11:50:26.836390150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836433 containerd[1569]: time="2025-03-19T11:50:26.836427047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836464 containerd[1569]: time="2025-03-19T11:50:26.836458641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836491171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836500238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836507617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836516668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836523373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836530013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836536763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836546133Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836557270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836565135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836575175Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836601057Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836610155Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:50:26.836798 containerd[1569]: time="2025-03-19T11:50:26.836615943Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 11:50:26.836983 containerd[1569]: time="2025-03-19T11:50:26.836623129Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:50:26.836983 containerd[1569]: time="2025-03-19T11:50:26.836628525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.836983 containerd[1569]: time="2025-03-19T11:50:26.836637045Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:50:26.836983 containerd[1569]: time="2025-03-19T11:50:26.836643462Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:50:26.836983 containerd[1569]: time="2025-03-19T11:50:26.836649100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 19 11:50:26.837217 containerd[1569]: time="2025-03-19T11:50:26.837160831Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:50:26.837432 containerd[1569]: time="2025-03-19T11:50:26.837191932Z" level=info msg="Connect containerd service" Mar 19 11:50:26.837432 containerd[1569]: time="2025-03-19T11:50:26.837335376Z" level=info msg="using legacy CRI server" Mar 19 11:50:26.837432 containerd[1569]: time="2025-03-19T11:50:26.837339712Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:50:26.837516 containerd[1569]: time="2025-03-19T11:50:26.837491115Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:50:26.837934 containerd[1569]: time="2025-03-19T11:50:26.837923896Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:50:26.838075 containerd[1569]: time="2025-03-19T11:50:26.838052651Z" level=info msg="Start subscribing containerd event" Mar 19 11:50:26.838100 containerd[1569]: time="2025-03-19T11:50:26.838081747Z" level=info msg="Start recovering state" Mar 19 11:50:26.838445 containerd[1569]: time="2025-03-19T11:50:26.838114313Z" level=info msg="Start event monitor" Mar 19 11:50:26.838445 containerd[1569]: time="2025-03-19T11:50:26.838131539Z" level=info msg="Start snapshots syncer" Mar 19 11:50:26.838445 containerd[1569]: time="2025-03-19T11:50:26.838141936Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:50:26.838445 containerd[1569]: time="2025-03-19T11:50:26.838146464Z" level=info msg="Start streaming server" Mar 19 11:50:26.838445 containerd[1569]: time="2025-03-19T11:50:26.838174764Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:50:26.838445 containerd[1569]: time="2025-03-19T11:50:26.838200219Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:50:26.838445 containerd[1569]: time="2025-03-19T11:50:26.838229472Z" level=info msg="containerd successfully booted in 0.026674s" Mar 19 11:50:26.838283 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:50:26.929783 tar[1555]: linux-amd64/LICENSE Mar 19 11:50:26.929919 tar[1555]: linux-amd64/README.md Mar 19 11:50:26.937082 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 19 11:50:28.089934 systemd-networkd[1453]: ens192: Gained IPv6LL Mar 19 11:50:28.094690 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:50:28.095157 systemd[1]: Reached target network-online.target - Network is Online. 
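The containerd entries above, including the long CRI plugin configuration dump, all use logrus-style key=value fields (time=..., level=..., msg=...). The small parser below is keyed to that shape and is handy for pulling plugin-skip reasons out of a capture like this one; the regex is an approximation of the format seen in the log, not containerd's own grammar.

# Approximate parser for the logrus-style lines containerd emits above.
import re

FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logrus(line):
    fields = {}
    for key, value in FIELD.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        fields[key] = value
    return fields

sample = ('time="2025-03-19T11:50:26.838229472Z" level=info '
          'msg="containerd successfully booted in 0.026674s"')
print(parse_logrus(sample)["msg"])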
Mar 19 11:50:28.099978 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Mar 19 11:50:28.101540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:50:28.102928 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:50:28.121000 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:50:28.132906 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 19 11:50:28.133116 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Mar 19 11:50:28.134089 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:50:28.989867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:50:28.990227 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:50:28.991180 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:50:28.991691 systemd[1]: Startup finished in 978ms (kernel) + 6.876s (initrd) + 4.816s (userspace) = 12.670s. Mar 19 11:50:29.019664 login[1671]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 19 11:50:29.021266 login[1672]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 19 11:50:29.030557 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:50:29.038288 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:50:29.041502 systemd-logind[1539]: New session 2 of user core. Mar 19 11:50:29.045465 systemd-logind[1539]: New session 1 of user core. Mar 19 11:50:29.049371 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:50:29.056977 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:50:29.058555 (systemd)[1716]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:50:29.060019 systemd-logind[1539]: New session c1 of user core. Mar 19 11:50:29.145094 systemd[1716]: Queued start job for default target default.target. Mar 19 11:50:29.150736 systemd[1716]: Created slice app.slice - User Application Slice. Mar 19 11:50:29.150766 systemd[1716]: Reached target paths.target - Paths. Mar 19 11:50:29.150796 systemd[1716]: Reached target timers.target - Timers. Mar 19 11:50:29.154198 systemd[1716]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:50:29.157899 systemd[1716]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:50:29.158341 systemd[1716]: Reached target sockets.target - Sockets. Mar 19 11:50:29.158374 systemd[1716]: Reached target basic.target - Basic System. Mar 19 11:50:29.158396 systemd[1716]: Reached target default.target - Main User Target. Mar 19 11:50:29.158412 systemd[1716]: Startup finished in 94ms. Mar 19 11:50:29.159319 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:50:29.160378 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 19 11:50:29.160969 systemd[1]: Started session-2.scope - Session 2 of User core. 
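The "Startup finished" line above splits the 12.670 s boot into kernel, initrd, and userspace phases. A one-off check of that sum, with the three durations copied from the log:

# Phase durations from the "Startup finished" line above, in seconds.
kernel, initrd, userspace = 0.978, 6.876, 4.816
print(f"{kernel + initrd + userspace:.3f} s")  # 12.670 s, matching the logged total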
Mar 19 11:50:29.478103 kubelet[1709]: E0319 11:50:29.478069 1709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:50:29.479649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:50:29.479739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:50:29.479947 systemd[1]: kubelet.service: Consumed 601ms CPU time, 246.1M memory peak. Mar 19 11:50:39.611916 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:50:39.616916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:50:39.852870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:50:39.855474 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:50:39.904387 kubelet[1759]: E0319 11:50:39.904319 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:50:39.906612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:50:39.906696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:50:39.907002 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.2M memory peak. Mar 19 11:50:50.112075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 11:50:50.116931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:50:50.433994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:50:50.436894 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:50:50.472824 kubelet[1774]: E0319 11:50:50.472794 1774 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:50:50.474382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:50:50.474519 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:50:50.475050 systemd[1]: kubelet.service: Consumed 87ms CPU time, 98.1M memory peak. Mar 19 11:50:56.644314 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:50:56.654006 systemd[1]: Started sshd@0-139.178.70.109:22-147.75.109.163:39362.service - OpenSSH per-connection server daemon (147.75.109.163:39362). Mar 19 11:50:56.692971 sshd[1783]: Accepted publickey for core from 147.75.109.163 port 39362 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:50:56.693686 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:50:56.697121 systemd-logind[1539]: New session 3 of user core. 
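Every kubelet exit above traces back to the same missing file, /var/lib/kubelet/config.yaml; the path is copied from the error message, and the sketch below is only a trivial pre-check for it, not part of kubelet. The scheduled restarts land almost exactly 10.5 s apart (11:50:39.6, 11:50:50.1, 11:51:00.6), which suggests a fixed restart delay in the unit; that reading is an inference from the timestamps, not something the log states.

# Trivial pre-check for the file whose absence causes the repeated kubelet exits
# above; the path is taken from the logged error message.
from pathlib import Path

config = Path("/var/lib/kubelet/config.yaml")
if not config.is_file():
    print(f"kubelet would fail: {config} does not exist yet")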
Mar 19 11:50:56.706930 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:50:56.766897 systemd[1]: Started sshd@1-139.178.70.109:22-147.75.109.163:39370.service - OpenSSH per-connection server daemon (147.75.109.163:39370). Mar 19 11:50:56.796948 sshd[1788]: Accepted publickey for core from 147.75.109.163 port 39370 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:50:56.797596 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:50:56.800450 systemd-logind[1539]: New session 4 of user core. Mar 19 11:50:56.805864 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:50:56.855895 sshd[1790]: Connection closed by 147.75.109.163 port 39370 Mar 19 11:50:56.855624 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Mar 19 11:50:56.869440 systemd[1]: sshd@1-139.178.70.109:22-147.75.109.163:39370.service: Deactivated successfully. Mar 19 11:50:56.870447 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:50:56.871449 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:50:56.872382 systemd[1]: Started sshd@2-139.178.70.109:22-147.75.109.163:39386.service - OpenSSH per-connection server daemon (147.75.109.163:39386). Mar 19 11:50:56.873960 systemd-logind[1539]: Removed session 4. Mar 19 11:50:56.909408 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 39386 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:50:56.910206 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:50:56.913916 systemd-logind[1539]: New session 5 of user core. Mar 19 11:50:56.923942 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:50:56.970779 sshd[1798]: Connection closed by 147.75.109.163 port 39386 Mar 19 11:50:56.971151 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Mar 19 11:50:56.981323 systemd[1]: sshd@2-139.178.70.109:22-147.75.109.163:39386.service: Deactivated successfully. Mar 19 11:50:56.982421 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:50:56.983051 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:50:56.984277 systemd[1]: Started sshd@3-139.178.70.109:22-147.75.109.163:39402.service - OpenSSH per-connection server daemon (147.75.109.163:39402). Mar 19 11:50:56.986202 systemd-logind[1539]: Removed session 5. Mar 19 11:50:57.027280 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 39402 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:50:57.028017 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:50:57.032575 systemd-logind[1539]: New session 6 of user core. Mar 19 11:50:57.037939 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 19 11:50:57.087068 sshd[1806]: Connection closed by 147.75.109.163 port 39402 Mar 19 11:50:57.087431 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Mar 19 11:50:57.098267 systemd[1]: sshd@3-139.178.70.109:22-147.75.109.163:39402.service: Deactivated successfully. Mar 19 11:50:57.099253 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:50:57.099798 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. 
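The sshd entries above follow a fixed shape ("Accepted publickey for core from <ip> port <port> ssh2: RSA SHA256:..."), which makes it easy to summarize who connected from where. The pattern below matches the exact phrasing visible in this log; other sshd configurations may phrase it differently.

# Summarize the sshd "Accepted publickey" entries seen above.
import re

ACCEPTED = re.compile(
    r"Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)")

def summarize(lines):
    for line in lines:
        m = ACCEPTED.search(line)
        if m:
            user, ip, port, key_type, fingerprint = m.groups()
            yield f"{user}@{ip}:{port} ({key_type} {fingerprint[:18]}...)"

sample = ("sshd[1795]: Accepted publickey for core from 147.75.109.163 "
          "port 39386 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM")
print(next(summarize([sample])))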
Mar 19 11:50:57.103980 systemd[1]: Started sshd@4-139.178.70.109:22-147.75.109.163:39404.service - OpenSSH per-connection server daemon (147.75.109.163:39404). Mar 19 11:50:57.106060 systemd-logind[1539]: Removed session 6. Mar 19 11:50:57.138159 sshd[1811]: Accepted publickey for core from 147.75.109.163 port 39404 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:50:57.138956 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:50:57.142208 systemd-logind[1539]: New session 7 of user core. Mar 19 11:50:57.150856 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:50:57.208080 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 19 11:50:57.208304 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:50:57.221487 sudo[1815]: pam_unix(sudo:session): session closed for user root Mar 19 11:50:57.222315 sshd[1814]: Connection closed by 147.75.109.163 port 39404 Mar 19 11:50:57.223250 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Mar 19 11:50:57.229402 systemd[1]: sshd@4-139.178.70.109:22-147.75.109.163:39404.service: Deactivated successfully. Mar 19 11:50:57.230931 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:50:57.231953 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:50:57.235981 systemd[1]: Started sshd@5-139.178.70.109:22-147.75.109.163:39412.service - OpenSSH per-connection server daemon (147.75.109.163:39412). Mar 19 11:50:57.236980 systemd-logind[1539]: Removed session 7. Mar 19 11:50:57.269362 sshd[1820]: Accepted publickey for core from 147.75.109.163 port 39412 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:50:57.269998 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:50:57.272363 systemd-logind[1539]: New session 8 of user core. Mar 19 11:50:57.279866 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:50:57.327238 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 19 11:50:57.327527 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:50:57.329133 sudo[1825]: pam_unix(sudo:session): session closed for user root Mar 19 11:50:57.331852 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 19 11:50:57.331992 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:50:57.339913 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:50:57.354196 augenrules[1847]: No rules Mar 19 11:50:57.354469 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:50:57.354591 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:50:57.355261 sudo[1824]: pam_unix(sudo:session): session closed for user root Mar 19 11:50:57.355924 sshd[1823]: Connection closed by 147.75.109.163 port 39412 Mar 19 11:50:57.356575 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Mar 19 11:50:57.361983 systemd[1]: sshd@5-139.178.70.109:22-147.75.109.163:39412.service: Deactivated successfully. Mar 19 11:50:57.362940 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:50:57.363470 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. 
Mar 19 11:50:57.364829 systemd-logind[1539]: Removed session 8. Mar 19 11:50:57.365384 systemd[1]: Started sshd@6-139.178.70.109:22-147.75.109.163:39428.service - OpenSSH per-connection server daemon (147.75.109.163:39428). Mar 19 11:50:57.397228 sshd[1855]: Accepted publickey for core from 147.75.109.163 port 39428 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:50:57.397819 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:50:57.400087 systemd-logind[1539]: New session 9 of user core. Mar 19 11:50:57.407833 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 19 11:50:57.455123 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:50:57.455422 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:50:57.742086 (dockerd)[1877]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 19 11:50:57.742100 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 19 11:50:58.002474 dockerd[1877]: time="2025-03-19T11:50:58.002406720Z" level=info msg="Starting up" Mar 19 11:50:58.048563 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2566943418-merged.mount: Deactivated successfully. Mar 19 11:50:58.063771 dockerd[1877]: time="2025-03-19T11:50:58.063715235Z" level=info msg="Loading containers: start." Mar 19 11:50:58.151770 kernel: Initializing XFRM netlink socket Mar 19 11:50:58.199015 systemd-networkd[1453]: docker0: Link UP Mar 19 11:50:58.217729 dockerd[1877]: time="2025-03-19T11:50:58.217462577Z" level=info msg="Loading containers: done." Mar 19 11:50:58.224036 dockerd[1877]: time="2025-03-19T11:50:58.224021445Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 19 11:50:58.224142 dockerd[1877]: time="2025-03-19T11:50:58.224132246Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 19 11:50:58.224226 dockerd[1877]: time="2025-03-19T11:50:58.224216999Z" level=info msg="Daemon has completed initialization" Mar 19 11:50:58.239947 dockerd[1877]: time="2025-03-19T11:50:58.239780866Z" level=info msg="API listen on /run/docker.sock" Mar 19 11:50:58.239974 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 19 11:50:59.149502 containerd[1569]: time="2025-03-19T11:50:59.149475328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 19 11:50:59.667499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800243539.mount: Deactivated successfully. Mar 19 11:51:00.612009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 19 11:51:00.617875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:51:00.709637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
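dockerd above reports "API listen on /run/docker.sock". As a hedged illustration of talking to that socket with nothing but the standard library: GET /_ping is the Docker Engine API health endpoint, the socket path is taken from the log, and the caller needs permission on the socket (root or the docker group).

# Minimal liveness check against the socket dockerd reports above, stdlib only.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")
resp = conn.getresponse()
print(resp.status, resp.read().decode())  # expect: 200 OK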
Mar 19 11:51:00.711790 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:51:00.717437 containerd[1569]: time="2025-03-19T11:51:00.717324179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:00.730373 containerd[1569]: time="2025-03-19T11:51:00.730332689Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 19 11:51:00.734737 containerd[1569]: time="2025-03-19T11:51:00.734717783Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:00.753850 kubelet[2133]: E0319 11:51:00.753784 2133 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:51:00.755383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:51:00.755501 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:51:00.755840 systemd[1]: kubelet.service: Consumed 78ms CPU time, 95.4M memory peak. Mar 19 11:51:00.926819 containerd[1569]: time="2025-03-19T11:51:00.926673690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:00.927589 containerd[1569]: time="2025-03-19T11:51:00.927229665Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 1.777729433s" Mar 19 11:51:00.927589 containerd[1569]: time="2025-03-19T11:51:00.927252375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 19 11:51:00.943533 containerd[1569]: time="2025-03-19T11:51:00.943486028Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 19 11:51:02.328782 containerd[1569]: time="2025-03-19T11:51:02.328340711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:02.333243 containerd[1569]: time="2025-03-19T11:51:02.333219641Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 19 11:51:02.340482 containerd[1569]: time="2025-03-19T11:51:02.340455276Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:02.347835 containerd[1569]: time="2025-03-19T11:51:02.347799184Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:02.348531 containerd[1569]: time="2025-03-19T11:51:02.348508229Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 1.404998414s" Mar 19 11:51:02.348573 containerd[1569]: time="2025-03-19T11:51:02.348530086Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 19 11:51:02.364847 containerd[1569]: time="2025-03-19T11:51:02.364819913Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 19 11:51:03.328360 containerd[1569]: time="2025-03-19T11:51:03.328159970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:03.328561 containerd[1569]: time="2025-03-19T11:51:03.328539230Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 19 11:51:03.329191 containerd[1569]: time="2025-03-19T11:51:03.328687475Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:03.330238 containerd[1569]: time="2025-03-19T11:51:03.330217428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:03.330918 containerd[1569]: time="2025-03-19T11:51:03.330858705Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 966.015719ms" Mar 19 11:51:03.330918 containerd[1569]: time="2025-03-19T11:51:03.330873001Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 19 11:51:03.343417 containerd[1569]: time="2025-03-19T11:51:03.343400415Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 19 11:51:04.130852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598485502.mount: Deactivated successfully. 
Mar 19 11:51:04.590275 containerd[1569]: time="2025-03-19T11:51:04.590210364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:04.590860 containerd[1569]: time="2025-03-19T11:51:04.590845208Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 19 11:51:04.591252 containerd[1569]: time="2025-03-19T11:51:04.591233133Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:04.592188 containerd[1569]: time="2025-03-19T11:51:04.592157570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:04.592670 containerd[1569]: time="2025-03-19T11:51:04.592544660Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.249105357s" Mar 19 11:51:04.592670 containerd[1569]: time="2025-03-19T11:51:04.592560200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 19 11:51:04.606531 containerd[1569]: time="2025-03-19T11:51:04.606514431Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 19 11:51:05.125110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528421994.mount: Deactivated successfully. 
Mar 19 11:51:05.901774 containerd[1569]: time="2025-03-19T11:51:05.901247503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:05.901774 containerd[1569]: time="2025-03-19T11:51:05.901557803Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 19 11:51:05.902311 containerd[1569]: time="2025-03-19T11:51:05.901861830Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:05.903450 containerd[1569]: time="2025-03-19T11:51:05.903437012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:05.904165 containerd[1569]: time="2025-03-19T11:51:05.904150161Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.297496115s" Mar 19 11:51:05.904198 containerd[1569]: time="2025-03-19T11:51:05.904168242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 19 11:51:05.916041 containerd[1569]: time="2025-03-19T11:51:05.916023381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 19 11:51:06.451963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898683776.mount: Deactivated successfully. 
Mar 19 11:51:06.453674 containerd[1569]: time="2025-03-19T11:51:06.453649313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:06.454080 containerd[1569]: time="2025-03-19T11:51:06.454052589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 19 11:51:06.454661 containerd[1569]: time="2025-03-19T11:51:06.454097349Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:06.455269 containerd[1569]: time="2025-03-19T11:51:06.455246374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:06.455891 containerd[1569]: time="2025-03-19T11:51:06.455665350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 539.549285ms" Mar 19 11:51:06.455891 containerd[1569]: time="2025-03-19T11:51:06.455682525Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 19 11:51:06.468048 containerd[1569]: time="2025-03-19T11:51:06.468023740Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 19 11:51:06.942496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172676697.mount: Deactivated successfully. Mar 19 11:51:09.300474 containerd[1569]: time="2025-03-19T11:51:09.300436046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:09.303650 containerd[1569]: time="2025-03-19T11:51:09.303620454Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 19 11:51:09.304247 containerd[1569]: time="2025-03-19T11:51:09.304227182Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:09.306189 containerd[1569]: time="2025-03-19T11:51:09.306154150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:09.307270 containerd[1569]: time="2025-03-19T11:51:09.307029835Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.838986621s" Mar 19 11:51:09.307270 containerd[1569]: time="2025-03-19T11:51:09.307053284Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 19 11:51:10.861736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
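The pull sequence above fetches the full kubeadm v1.30 control-plane image set through containerd's CRI image service (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.30.11, coredns v1.11.1, pause 3.9 and etcd 3.5.12-0). A minimal sketch for checking that set by hand, assuming crictl is installed and pointed at the containerd socket in use here:

    # List the images containerd now holds for the CRI.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # Re-pull one of them manually if it is missing (same reference as in the log).
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/etcd:3.5.12-0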
Mar 19 11:51:10.870495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:51:11.204635 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:51:11.204840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:51:11.256166 kubelet[2341]: E0319 11:51:11.256140 2341 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:51:11.257270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:51:11.257351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:51:11.257688 systemd[1]: kubelet.service: Consumed 70ms CPU time, 96.5M memory peak. Mar 19 11:51:11.333809 update_engine[1548]: I20250319 11:51:11.333774 1548 update_attempter.cc:509] Updating boot flags... Mar 19 11:51:11.359777 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2360) Mar 19 11:51:11.527266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:51:11.527642 systemd[1]: kubelet.service: Consumed 70ms CPU time, 96.5M memory peak. Mar 19 11:51:11.541030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:51:11.555654 systemd[1]: Reload requested from client PID 2372 ('systemctl') (unit session-9.scope)... Mar 19 11:51:11.555737 systemd[1]: Reloading... Mar 19 11:51:11.611795 zram_generator::config[2416]: No configuration found. Mar 19 11:51:11.670741 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 19 11:51:11.688750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:51:11.752142 systemd[1]: Reloading finished in 196 ms. Mar 19 11:51:11.784140 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 19 11:51:11.784193 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 19 11:51:11.784337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:51:11.790983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:51:12.058706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:51:12.065952 (kubelet)[2484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:51:12.109391 kubelet[2484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:51:12.109391 kubelet[2484]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
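The crash loop recorded above (exit status 1, restart counter climbing) comes from /var/lib/kubelet/config.yaml not existing yet; on a kubeadm-managed node that file, together with /var/lib/kubelet/kubeadm-flags.env (the usual source of KUBELET_KUBEADM_ARGS), is only written when kubeadm init/join runs, and the deprecation notices for --container-runtime-endpoint and friends point at the same config file. A minimal sketch for checking that state on the node, assuming the standard kubeadm file layout:

    # Both files are absent until kubeadm runs; their absence explains status=1/FAILURE.
    ls -l /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env
    # Show the systemd drop-in that references KUBELET_KUBEADM_ARGS / KUBELET_EXTRA_ARGS.
    systemctl cat kubelet
    # Confirm the "no such file or directory" error from the last attempt.
    journalctl -u kubelet -n 20 --no-pager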
Mar 19 11:51:12.109391 kubelet[2484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:51:12.114566 kubelet[2484]: I0319 11:51:12.114511 2484 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:51:12.452560 kubelet[2484]: I0319 11:51:12.452541 2484 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 19 11:51:12.452560 kubelet[2484]: I0319 11:51:12.452555 2484 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:51:12.452967 kubelet[2484]: I0319 11:51:12.452672 2484 server.go:927] "Client rotation is on, will bootstrap in background" Mar 19 11:51:12.613691 kubelet[2484]: I0319 11:51:12.613671 2484 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:51:12.623073 kubelet[2484]: E0319 11:51:12.623003 2484 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.634364 kubelet[2484]: I0319 11:51:12.634350 2484 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:51:12.635788 kubelet[2484]: I0319 11:51:12.635599 2484 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:51:12.637300 kubelet[2484]: I0319 11:51:12.635631 2484 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 19 11:51:12.638008 kubelet[2484]: I0319 11:51:12.637852 2484 topology_manager.go:138] "Creating topology 
manager with none policy" Mar 19 11:51:12.638008 kubelet[2484]: I0319 11:51:12.637870 2484 container_manager_linux.go:301] "Creating device plugin manager" Mar 19 11:51:12.639610 kubelet[2484]: I0319 11:51:12.639503 2484 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:51:12.640664 kubelet[2484]: W0319 11:51:12.640620 2484 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.640709 kubelet[2484]: E0319 11:51:12.640668 2484 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.642013 kubelet[2484]: I0319 11:51:12.641924 2484 kubelet.go:400] "Attempting to sync node with API server" Mar 19 11:51:12.642013 kubelet[2484]: I0319 11:51:12.641942 2484 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:51:12.642548 kubelet[2484]: I0319 11:51:12.642442 2484 kubelet.go:312] "Adding apiserver pod source" Mar 19 11:51:12.642548 kubelet[2484]: I0319 11:51:12.642460 2484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:51:12.645616 kubelet[2484]: W0319 11:51:12.645567 2484 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.645616 kubelet[2484]: E0319 11:51:12.645599 2484 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.646289 kubelet[2484]: I0319 11:51:12.646252 2484 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:51:12.647864 kubelet[2484]: I0319 11:51:12.647765 2484 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:51:12.649692 kubelet[2484]: W0319 11:51:12.649450 2484 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
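The HardEvictionThresholds dumped in the node config above correspond to the evictionHard section of the kubelet config file. A minimal sketch of the equivalent YAML, written to a scratch path (the file name is illustrative; the values are the ones in the log: 100Mi memory, 10% nodefs, 5% nodefs inodes, 15% imagefs, 5% imagefs inodes):

    cat <<'EOF' > /tmp/kubelet-eviction-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF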
Mar 19 11:51:12.651068 kubelet[2484]: I0319 11:51:12.650861 2484 server.go:1264] "Started kubelet" Mar 19 11:51:12.652813 kubelet[2484]: I0319 11:51:12.652573 2484 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:51:12.654358 kubelet[2484]: I0319 11:51:12.654248 2484 server.go:455] "Adding debug handlers to kubelet server" Mar 19 11:51:12.655527 kubelet[2484]: I0319 11:51:12.655220 2484 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:51:12.655527 kubelet[2484]: I0319 11:51:12.655353 2484 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:51:12.655527 kubelet[2484]: E0319 11:51:12.655421 2484 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.109:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e31fdf54c08e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:51:12.650848485 +0000 UTC m=+0.582816423,LastTimestamp:2025-03-19 11:51:12.650848485 +0000 UTC m=+0.582816423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 19 11:51:12.656683 kubelet[2484]: I0319 11:51:12.656255 2484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:51:12.661346 kubelet[2484]: I0319 11:51:12.661094 2484 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 19 11:51:12.662216 kubelet[2484]: E0319 11:51:12.662200 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="200ms" Mar 19 11:51:12.662405 kubelet[2484]: I0319 11:51:12.662397 2484 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:51:12.662502 kubelet[2484]: I0319 11:51:12.662485 2484 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:51:12.663064 kubelet[2484]: E0319 11:51:12.663056 2484 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:51:12.663615 kubelet[2484]: I0319 11:51:12.663532 2484 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:51:12.663676 kubelet[2484]: I0319 11:51:12.663555 2484 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:51:12.663817 kubelet[2484]: I0319 11:51:12.663809 2484 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:51:12.668004 kubelet[2484]: I0319 11:51:12.667982 2484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:51:12.668519 kubelet[2484]: I0319 11:51:12.668506 2484 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 19 11:51:12.668543 kubelet[2484]: I0319 11:51:12.668521 2484 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:51:12.668543 kubelet[2484]: I0319 11:51:12.668529 2484 kubelet.go:2337] "Starting kubelet main sync loop" Mar 19 11:51:12.668573 kubelet[2484]: E0319 11:51:12.668547 2484 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:51:12.672671 kubelet[2484]: W0319 11:51:12.672648 2484 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.672706 kubelet[2484]: E0319 11:51:12.672684 2484 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.672850 kubelet[2484]: W0319 11:51:12.672831 2484 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.672877 kubelet[2484]: E0319 11:51:12.672854 2484 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:12.685945 kubelet[2484]: I0319 11:51:12.685937 2484 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:51:12.686147 kubelet[2484]: I0319 11:51:12.686004 2484 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:51:12.686147 kubelet[2484]: I0319 11:51:12.686027 2484 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:51:12.689256 kubelet[2484]: I0319 11:51:12.689220 2484 policy_none.go:49] "None policy: Start" Mar 19 11:51:12.689652 kubelet[2484]: I0319 11:51:12.689492 2484 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:51:12.689652 kubelet[2484]: I0319 11:51:12.689508 2484 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:51:12.694350 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:51:12.706063 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:51:12.708783 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 19 11:51:12.715249 kubelet[2484]: I0319 11:51:12.715240 2484 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:51:12.715569 kubelet[2484]: I0319 11:51:12.715372 2484 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:51:12.715569 kubelet[2484]: I0319 11:51:12.715428 2484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:51:12.716575 kubelet[2484]: E0319 11:51:12.716531 2484 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 19 11:51:12.762311 kubelet[2484]: I0319 11:51:12.762264 2484 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:51:12.762594 kubelet[2484]: E0319 11:51:12.762558 2484 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Mar 19 11:51:12.768824 kubelet[2484]: I0319 11:51:12.768789 2484 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 19 11:51:12.769427 kubelet[2484]: I0319 11:51:12.769405 2484 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 19 11:51:12.770854 kubelet[2484]: I0319 11:51:12.770187 2484 topology_manager.go:215] "Topology Admit Handler" podUID="8e210b3f8898aea15295d68783957283" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 19 11:51:12.778067 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. Mar 19 11:51:12.800575 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. Mar 19 11:51:12.804145 systemd[1]: Created slice kubepods-burstable-pod8e210b3f8898aea15295d68783957283.slice - libcontainer container kubepods-burstable-pod8e210b3f8898aea15295d68783957283.slice. 
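The three Topology Admit Handler entries above are the control-plane static pods the kubelet reads from the static pod path logged earlier (/etc/kubernetes/manifests); each one gets its own burstable kubepods slice, which is why systemd reports the pod-UID slices right after admission. A minimal sketch for correlating the two on the node, assuming the standard kubeadm manifest layout:

    # One manifest per admitted pod (etcd.yaml appears here only when etcd runs as a static pod).
    ls /etc/kubernetes/manifests
    # The per-pod slices systemd just created, named after the pod UIDs in the log.
    systemctl list-units --no-pager 'kubepods*.slice'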
Mar 19 11:51:12.863173 kubelet[2484]: E0319 11:51:12.863136 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="400ms" Mar 19 11:51:12.865710 kubelet[2484]: I0319 11:51:12.865677 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:12.964439 kubelet[2484]: I0319 11:51:12.964322 2484 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:51:12.964640 kubelet[2484]: E0319 11:51:12.964564 2484 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Mar 19 11:51:12.965916 kubelet[2484]: I0319 11:51:12.965816 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:12.965916 kubelet[2484]: I0319 11:51:12.965840 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:12.965916 kubelet[2484]: I0319 11:51:12.965856 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:51:12.965916 kubelet[2484]: I0319 11:51:12.965868 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e210b3f8898aea15295d68783957283-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e210b3f8898aea15295d68783957283\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:51:12.965916 kubelet[2484]: I0319 11:51:12.965909 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:12.966090 kubelet[2484]: I0319 11:51:12.965925 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 
11:51:12.966090 kubelet[2484]: I0319 11:51:12.965936 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e210b3f8898aea15295d68783957283-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e210b3f8898aea15295d68783957283\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:51:12.966090 kubelet[2484]: I0319 11:51:12.965948 2484 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e210b3f8898aea15295d68783957283-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e210b3f8898aea15295d68783957283\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:51:13.099104 containerd[1569]: time="2025-03-19T11:51:13.099071927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:13.103580 containerd[1569]: time="2025-03-19T11:51:13.103552262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:13.106149 containerd[1569]: time="2025-03-19T11:51:13.106119611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e210b3f8898aea15295d68783957283,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:13.263481 kubelet[2484]: E0319 11:51:13.263415 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="800ms" Mar 19 11:51:13.365939 kubelet[2484]: I0319 11:51:13.365882 2484 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:51:13.366127 kubelet[2484]: E0319 11:51:13.366097 2484 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Mar 19 11:51:13.415725 kubelet[2484]: E0319 11:51:13.415653 2484 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.109:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e31fdf54c08e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:51:12.650848485 +0000 UTC m=+0.582816423,LastTimestamp:2025-03-19 11:51:12.650848485 +0000 UTC m=+0.582816423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 19 11:51:13.594197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871938911.mount: Deactivated successfully. 
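Every volume the reconciler attaches above (ca-certs, k8s-certs, kubeconfig, flexvolume-dir, usr-share-ca-certificates) is a hostPath mount declared in those static pod manifests, so each resolves to a plain directory or file on the node. A minimal sketch for spot-checking the backing paths, using the conventional kubeadm locations (only the flexvolume path is quoted from this log; the rest are assumptions):

    ls -ld /etc/kubernetes/pki /etc/ssl/certs /usr/share/ca-certificates
    ls -l /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    ls -ld /opt/libexec/kubernetes/kubelet-plugins/volume/exec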
Mar 19 11:51:13.596443 containerd[1569]: time="2025-03-19T11:51:13.596402295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:13.596989 containerd[1569]: time="2025-03-19T11:51:13.596967680Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:13.597392 containerd[1569]: time="2025-03-19T11:51:13.597365873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:51:13.597718 containerd[1569]: time="2025-03-19T11:51:13.597675498Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:13.598018 containerd[1569]: time="2025-03-19T11:51:13.597987143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 19 11:51:13.598898 containerd[1569]: time="2025-03-19T11:51:13.598782662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:51:13.600632 containerd[1569]: time="2025-03-19T11:51:13.600618080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.881587ms" Mar 19 11:51:13.605763 containerd[1569]: time="2025-03-19T11:51:13.604341856Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 504.282022ms" Mar 19 11:51:13.605763 containerd[1569]: time="2025-03-19T11:51:13.604643671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:13.605763 containerd[1569]: time="2025-03-19T11:51:13.605026699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:51:13.606224 containerd[1569]: time="2025-03-19T11:51:13.606212716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.049172ms" Mar 19 11:51:13.747907 kubelet[2484]: W0319 11:51:13.747865 2484 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 
11:51:13.748062 kubelet[2484]: E0319 11:51:13.748051 2484 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:13.809036 containerd[1569]: time="2025-03-19T11:51:13.808927251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:13.809036 containerd[1569]: time="2025-03-19T11:51:13.808960562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:13.809036 containerd[1569]: time="2025-03-19T11:51:13.809014289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:13.809311 containerd[1569]: time="2025-03-19T11:51:13.809077378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:13.812047 containerd[1569]: time="2025-03-19T11:51:13.811793714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:13.812047 containerd[1569]: time="2025-03-19T11:51:13.811820535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:13.812308 containerd[1569]: time="2025-03-19T11:51:13.811830270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:13.812308 containerd[1569]: time="2025-03-19T11:51:13.812235698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:13.813213 containerd[1569]: time="2025-03-19T11:51:13.809446351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:13.813213 containerd[1569]: time="2025-03-19T11:51:13.812797941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:13.813213 containerd[1569]: time="2025-03-19T11:51:13.812806893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:13.814031 containerd[1569]: time="2025-03-19T11:51:13.814013550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:13.843164 systemd[1]: Started cri-containerd-79724a48181c4538135dab18d664dff91a07cd2d344e42a730343c910241a716.scope - libcontainer container 79724a48181c4538135dab18d664dff91a07cd2d344e42a730343c910241a716. Mar 19 11:51:13.846595 systemd[1]: Started cri-containerd-13586c87fa1e3cfbba2d4bdbea3ad815f2d4a7aabe8d80d882476cafbe01af9b.scope - libcontainer container 13586c87fa1e3cfbba2d4bdbea3ad815f2d4a7aabe8d80d882476cafbe01af9b. Mar 19 11:51:13.848288 systemd[1]: Started cri-containerd-fca8949e66d2fe8c816a441cd3b122f5f2e9013e409e2332b1f2f7367e9b8f40.scope - libcontainer container fca8949e66d2fe8c816a441cd3b122f5f2e9013e409e2332b1f2f7367e9b8f40. 
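From here each control-plane pod follows the CRI sequence the log records: RunPodSandbox returns a sandbox id, CreateContainer is called inside it, then StartContainer. A minimal sketch for replaying that state with crictl, assuming it targets the same containerd instance (the pod name is taken from the log):

    # Sandboxes created by RunPodSandbox.
    crictl pods --name kube-apiserver-localhost
    # Containers created/started inside them (-a includes ones not yet running).
    crictl ps -a --name kube-apiserver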
Mar 19 11:51:13.884584 containerd[1569]: time="2025-03-19T11:51:13.884565781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"fca8949e66d2fe8c816a441cd3b122f5f2e9013e409e2332b1f2f7367e9b8f40\"" Mar 19 11:51:13.886070 containerd[1569]: time="2025-03-19T11:51:13.885979228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e210b3f8898aea15295d68783957283,Namespace:kube-system,Attempt:0,} returns sandbox id \"79724a48181c4538135dab18d664dff91a07cd2d344e42a730343c910241a716\"" Mar 19 11:51:13.887632 containerd[1569]: time="2025-03-19T11:51:13.887604197Z" level=info msg="CreateContainer within sandbox \"79724a48181c4538135dab18d664dff91a07cd2d344e42a730343c910241a716\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:51:13.887685 containerd[1569]: time="2025-03-19T11:51:13.887675200Z" level=info msg="CreateContainer within sandbox \"fca8949e66d2fe8c816a441cd3b122f5f2e9013e409e2332b1f2f7367e9b8f40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:51:13.898542 containerd[1569]: time="2025-03-19T11:51:13.898512747Z" level=info msg="CreateContainer within sandbox \"fca8949e66d2fe8c816a441cd3b122f5f2e9013e409e2332b1f2f7367e9b8f40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7ddc99b8da4d8ee983d23fb9eff1b796150a9671abf465f30d6b7eb1a22b4104\"" Mar 19 11:51:13.899780 containerd[1569]: time="2025-03-19T11:51:13.899514963Z" level=info msg="StartContainer for \"7ddc99b8da4d8ee983d23fb9eff1b796150a9671abf465f30d6b7eb1a22b4104\"" Mar 19 11:51:13.900196 containerd[1569]: time="2025-03-19T11:51:13.900183997Z" level=info msg="CreateContainer within sandbox \"79724a48181c4538135dab18d664dff91a07cd2d344e42a730343c910241a716\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14a71247c36a7be96a47a37664a3f4f03cd33b16f2dfa4fa4fe3269f305d0ad7\"" Mar 19 11:51:13.900377 containerd[1569]: time="2025-03-19T11:51:13.900367566Z" level=info msg="StartContainer for \"14a71247c36a7be96a47a37664a3f4f03cd33b16f2dfa4fa4fe3269f305d0ad7\"" Mar 19 11:51:13.904005 containerd[1569]: time="2025-03-19T11:51:13.903982672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"13586c87fa1e3cfbba2d4bdbea3ad815f2d4a7aabe8d80d882476cafbe01af9b\"" Mar 19 11:51:13.905583 containerd[1569]: time="2025-03-19T11:51:13.905521892Z" level=info msg="CreateContainer within sandbox \"13586c87fa1e3cfbba2d4bdbea3ad815f2d4a7aabe8d80d882476cafbe01af9b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:51:13.912331 containerd[1569]: time="2025-03-19T11:51:13.912315602Z" level=info msg="CreateContainer within sandbox \"13586c87fa1e3cfbba2d4bdbea3ad815f2d4a7aabe8d80d882476cafbe01af9b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf9c6d561f8c33f7f66962834e72aac858bfb6a141c75d977e38c8aa82dd7d7c\"" Mar 19 11:51:13.913340 containerd[1569]: time="2025-03-19T11:51:13.912817997Z" level=info msg="StartContainer for \"cf9c6d561f8c33f7f66962834e72aac858bfb6a141c75d977e38c8aa82dd7d7c\"" Mar 19 11:51:13.918155 kubelet[2484]: W0319 11:51:13.918121 2484 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://139.178.70.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:13.918205 kubelet[2484]: E0319 11:51:13.918159 2484 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:13.926916 systemd[1]: Started cri-containerd-14a71247c36a7be96a47a37664a3f4f03cd33b16f2dfa4fa4fe3269f305d0ad7.scope - libcontainer container 14a71247c36a7be96a47a37664a3f4f03cd33b16f2dfa4fa4fe3269f305d0ad7. Mar 19 11:51:13.928195 systemd[1]: Started cri-containerd-7ddc99b8da4d8ee983d23fb9eff1b796150a9671abf465f30d6b7eb1a22b4104.scope - libcontainer container 7ddc99b8da4d8ee983d23fb9eff1b796150a9671abf465f30d6b7eb1a22b4104. Mar 19 11:51:13.934103 systemd[1]: Started cri-containerd-cf9c6d561f8c33f7f66962834e72aac858bfb6a141c75d977e38c8aa82dd7d7c.scope - libcontainer container cf9c6d561f8c33f7f66962834e72aac858bfb6a141c75d977e38c8aa82dd7d7c. Mar 19 11:51:13.945162 kubelet[2484]: W0319 11:51:13.945127 2484 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:13.945219 kubelet[2484]: E0319 11:51:13.945165 2484 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:13.965933 containerd[1569]: time="2025-03-19T11:51:13.965847394Z" level=info msg="StartContainer for \"7ddc99b8da4d8ee983d23fb9eff1b796150a9671abf465f30d6b7eb1a22b4104\" returns successfully" Mar 19 11:51:13.972748 containerd[1569]: time="2025-03-19T11:51:13.972733482Z" level=info msg="StartContainer for \"14a71247c36a7be96a47a37664a3f4f03cd33b16f2dfa4fa4fe3269f305d0ad7\" returns successfully" Mar 19 11:51:13.982921 containerd[1569]: time="2025-03-19T11:51:13.982877772Z" level=info msg="StartContainer for \"cf9c6d561f8c33f7f66962834e72aac858bfb6a141c75d977e38c8aa82dd7d7c\" returns successfully" Mar 19 11:51:14.064993 kubelet[2484]: E0319 11:51:14.064963 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="1.6s" Mar 19 11:51:14.168466 kubelet[2484]: I0319 11:51:14.168450 2484 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:51:14.168789 kubelet[2484]: E0319 11:51:14.168775 2484 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Mar 19 11:51:14.263773 kubelet[2484]: W0319 11:51:14.263446 2484 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:14.263773 kubelet[2484]: E0319 11:51:14.263512 2484 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Mar 19 11:51:15.619333 kubelet[2484]: E0319 11:51:15.619303 2484 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 19 11:51:15.666858 kubelet[2484]: E0319 11:51:15.666817 2484 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 19 11:51:15.769868 kubelet[2484]: I0319 11:51:15.769841 2484 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:51:15.775547 kubelet[2484]: I0319 11:51:15.775516 2484 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 19 11:51:16.647972 kubelet[2484]: I0319 11:51:16.647945 2484 apiserver.go:52] "Watching apiserver" Mar 19 11:51:16.664477 kubelet[2484]: I0319 11:51:16.664436 2484 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:51:17.077569 systemd[1]: Reload requested from client PID 2757 ('systemctl') (unit session-9.scope)... Mar 19 11:51:17.077583 systemd[1]: Reloading... Mar 19 11:51:17.125791 zram_generator::config[2808]: No configuration found. Mar 19 11:51:17.178635 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 19 11:51:17.196379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:51:17.268352 systemd[1]: Reloading finished in 190 ms. Mar 19 11:51:17.284353 kubelet[2484]: E0319 11:51:17.284050 2484 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.182e31fdf54c08e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:51:12.650848485 +0000 UTC m=+0.582816423,LastTimestamp:2025-03-19 11:51:12.650848485 +0000 UTC m=+0.582816423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 19 11:51:17.284353 kubelet[2484]: I0319 11:51:17.284301 2484 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:51:17.284170 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:51:17.289979 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:51:17.290107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:51:17.290133 systemd[1]: kubelet.service: Consumed 566ms CPU time, 113.9M memory peak. Mar 19 11:51:17.296086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:51:17.421690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
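The repeated "connection refused" reflector and lease errors earlier are expected during bootstrap: the apiserver at https://139.178.70.109:6443 is itself one of the static pods being started, so they stop once it serves and the node registers (as the 11:51:15 entries above show). A minimal sketch for confirming that from an admin kubeconfig, using the node name and endpoint from the log:

    kubectl get node localhost
    kubectl -n kube-node-lease get lease localhost
    # The endpoint the kubelet was dialing.
    curl -k https://139.178.70.109:6443/healthz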
Mar 19 11:51:17.424336 (kubelet)[2869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:51:17.530779 kubelet[2869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:51:17.530779 kubelet[2869]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:51:17.530779 kubelet[2869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:51:17.530998 kubelet[2869]: I0319 11:51:17.530775 2869 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:51:17.534784 kubelet[2869]: I0319 11:51:17.533556 2869 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 19 11:51:17.534784 kubelet[2869]: I0319 11:51:17.533566 2869 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:51:17.534784 kubelet[2869]: I0319 11:51:17.533653 2869 server.go:927] "Client rotation is on, will bootstrap in background" Mar 19 11:51:17.534784 kubelet[2869]: I0319 11:51:17.534320 2869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 19 11:51:17.535329 kubelet[2869]: I0319 11:51:17.535317 2869 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:51:17.541824 kubelet[2869]: I0319 11:51:17.541811 2869 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 19 11:51:17.541960 kubelet[2869]: I0319 11:51:17.541940 2869 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:51:17.542056 kubelet[2869]: I0319 11:51:17.541962 2869 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 19 11:51:17.542118 kubelet[2869]: I0319 11:51:17.542063 2869 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:51:17.542118 kubelet[2869]: I0319 11:51:17.542070 2869 container_manager_linux.go:301] "Creating device plugin manager" Mar 19 11:51:17.542118 kubelet[2869]: I0319 11:51:17.542097 2869 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:51:17.542174 kubelet[2869]: I0319 11:51:17.542145 2869 kubelet.go:400] "Attempting to sync node with API server" Mar 19 11:51:17.542174 kubelet[2869]: I0319 11:51:17.542155 2869 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:51:17.542174 kubelet[2869]: I0319 11:51:17.542167 2869 kubelet.go:312] "Adding apiserver pod source" Mar 19 11:51:17.542858 kubelet[2869]: I0319 11:51:17.542175 2869 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:51:17.543010 kubelet[2869]: I0319 11:51:17.543002 2869 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:51:17.544208 kubelet[2869]: I0319 11:51:17.543129 2869 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:51:17.544208 kubelet[2869]: I0319 11:51:17.543324 2869 server.go:1264] "Started kubelet" Mar 19 11:51:17.544208 kubelet[2869]: I0319 11:51:17.544148 2869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:51:17.553547 kubelet[2869]: I0319 11:51:17.553521 2869 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:51:17.554733 kubelet[2869]: I0319 11:51:17.554714 2869 server.go:455] "Adding debug handlers to 
kubelet server" Mar 19 11:51:17.555935 kubelet[2869]: I0319 11:51:17.555601 2869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:51:17.555935 kubelet[2869]: I0319 11:51:17.555712 2869 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:51:17.557052 kubelet[2869]: I0319 11:51:17.557044 2869 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 19 11:51:17.559538 kubelet[2869]: I0319 11:51:17.559530 2869 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:51:17.559646 kubelet[2869]: I0319 11:51:17.559640 2869 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:51:17.561175 kubelet[2869]: I0319 11:51:17.561145 2869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:51:17.561796 kubelet[2869]: I0319 11:51:17.561788 2869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:51:17.561988 kubelet[2869]: I0319 11:51:17.561978 2869 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:51:17.562046 kubelet[2869]: I0319 11:51:17.562041 2869 kubelet.go:2337] "Starting kubelet main sync loop" Mar 19 11:51:17.562097 kubelet[2869]: I0319 11:51:17.562083 2869 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:51:17.562144 kubelet[2869]: E0319 11:51:17.562134 2869 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:51:17.563511 kubelet[2869]: I0319 11:51:17.563475 2869 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:51:17.563511 kubelet[2869]: I0319 11:51:17.563484 2869 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:51:17.602741 kubelet[2869]: I0319 11:51:17.602727 2869 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:51:17.602907 kubelet[2869]: I0319 11:51:17.602897 2869 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:51:17.602952 kubelet[2869]: I0319 11:51:17.602948 2869 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:51:17.603068 kubelet[2869]: I0319 11:51:17.603061 2869 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 11:51:17.603117 kubelet[2869]: I0319 11:51:17.603104 2869 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 11:51:17.603148 kubelet[2869]: I0319 11:51:17.603144 2869 policy_none.go:49] "None policy: Start" Mar 19 11:51:17.603459 kubelet[2869]: I0319 11:51:17.603452 2869 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:51:17.603498 kubelet[2869]: I0319 11:51:17.603494 2869 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:51:17.603595 kubelet[2869]: I0319 11:51:17.603589 2869 state_mem.go:75] "Updated machine memory state" Mar 19 11:51:17.606169 kubelet[2869]: I0319 11:51:17.606159 2869 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:51:17.606303 kubelet[2869]: I0319 11:51:17.606287 2869 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:51:17.606380 kubelet[2869]: I0319 11:51:17.606374 2869 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:51:17.658789 kubelet[2869]: I0319 11:51:17.658767 2869 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 19 11:51:17.663483 kubelet[2869]: I0319 11:51:17.662996 2869 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 19 11:51:17.663483 kubelet[2869]: I0319 11:51:17.663047 2869 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 19 11:51:17.663953 kubelet[2869]: I0319 11:51:17.663658 2869 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 19 11:51:17.663953 kubelet[2869]: I0319 11:51:17.663751 2869 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 19 11:51:17.663953 kubelet[2869]: I0319 11:51:17.663832 2869 topology_manager.go:215] "Topology Admit Handler" podUID="8e210b3f8898aea15295d68783957283" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 19 11:51:17.672744 kubelet[2869]: E0319 11:51:17.672657 2869 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 19 11:51:17.761037 kubelet[2869]: I0319 11:51:17.760998 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:17.761037 kubelet[2869]: I0319 11:51:17.761037 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e210b3f8898aea15295d68783957283-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e210b3f8898aea15295d68783957283\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:51:17.761242 kubelet[2869]: I0319 11:51:17.761056 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e210b3f8898aea15295d68783957283-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e210b3f8898aea15295d68783957283\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:51:17.761242 kubelet[2869]: I0319 11:51:17.761070 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:17.761242 kubelet[2869]: I0319 11:51:17.761083 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:17.761242 kubelet[2869]: I0319 11:51:17.761097 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:51:17.761242 kubelet[2869]: I0319 11:51:17.761108 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e210b3f8898aea15295d68783957283-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e210b3f8898aea15295d68783957283\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:51:17.761354 kubelet[2869]: I0319 11:51:17.761119 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:17.761354 kubelet[2869]: I0319 11:51:17.761132 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:51:18.090170 sudo[2900]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 19 11:51:18.090586 sudo[2900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 19 11:51:18.440868 sudo[2900]: pam_unix(sudo:session): session closed for user root Mar 19 11:51:18.543118 kubelet[2869]: I0319 11:51:18.542999 2869 apiserver.go:52] "Watching apiserver" Mar 19 11:51:18.560592 kubelet[2869]: I0319 11:51:18.560571 2869 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:51:18.598994 kubelet[2869]: E0319 11:51:18.598778 2869 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 19 11:51:18.609410 kubelet[2869]: I0319 11:51:18.609379 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.609368221 podStartE2EDuration="1.609368221s" podCreationTimestamp="2025-03-19 11:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:18.609139138 +0000 UTC m=+1.113645273" watchObservedRunningTime="2025-03-19 11:51:18.609368221 +0000 UTC m=+1.113874349" Mar 19 11:51:18.609731 kubelet[2869]: I0319 11:51:18.609714 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.609708692 podStartE2EDuration="2.609708692s" podCreationTimestamp="2025-03-19 11:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:18.605318713 +0000 UTC m=+1.109824846" watchObservedRunningTime="2025-03-19 11:51:18.609708692 +0000 UTC m=+1.114214821" Mar 19 11:51:18.617400 kubelet[2869]: I0319 11:51:18.617295 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.617286057 
podStartE2EDuration="1.617286057s" podCreationTimestamp="2025-03-19 11:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:18.612862276 +0000 UTC m=+1.117368401" watchObservedRunningTime="2025-03-19 11:51:18.617286057 +0000 UTC m=+1.121792186" Mar 19 11:51:19.467996 sudo[1859]: pam_unix(sudo:session): session closed for user root Mar 19 11:51:19.469048 sshd[1858]: Connection closed by 147.75.109.163 port 39428 Mar 19 11:51:19.469789 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Mar 19 11:51:19.472118 systemd[1]: sshd@6-139.178.70.109:22-147.75.109.163:39428.service: Deactivated successfully. Mar 19 11:51:19.473561 systemd[1]: session-9.scope: Deactivated successfully. Mar 19 11:51:19.473737 systemd[1]: session-9.scope: Consumed 2.944s CPU time, 234.9M memory peak. Mar 19 11:51:19.474664 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:51:19.475406 systemd-logind[1539]: Removed session 9. Mar 19 11:51:31.405447 kubelet[2869]: I0319 11:51:31.405424 2869 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 19 11:51:31.405723 containerd[1569]: time="2025-03-19T11:51:31.405678465Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 19 11:51:31.405944 kubelet[2869]: I0319 11:51:31.405800 2869 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 19 11:51:32.264477 kubelet[2869]: I0319 11:51:32.264444 2869 topology_manager.go:215] "Topology Admit Handler" podUID="bfb39304-27f5-4c67-82b2-343a8f2f2c51" podNamespace="kube-system" podName="kube-proxy-w9l8v" Mar 19 11:51:32.265553 kubelet[2869]: I0319 11:51:32.265535 2869 topology_manager.go:215] "Topology Admit Handler" podUID="36de7565-c8c1-49aa-80d1-a3b015d66662" podNamespace="kube-system" podName="cilium-szzcf" Mar 19 11:51:32.274235 systemd[1]: Created slice kubepods-besteffort-podbfb39304_27f5_4c67_82b2_343a8f2f2c51.slice - libcontainer container kubepods-besteffort-podbfb39304_27f5_4c67_82b2_343a8f2f2c51.slice. Mar 19 11:51:32.283298 systemd[1]: Created slice kubepods-burstable-pod36de7565_c8c1_49aa_80d1_a3b015d66662.slice - libcontainer container kubepods-burstable-pod36de7565_c8c1_49aa_80d1_a3b015d66662.slice. 
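The systemd entries above show the kubelet, which runs with the systemd cgroup driver per the NodeConfig logged at startup, creating one slice per admitted pod, named from the pod's QoS class plus its UID with dashes mapped to underscores. A minimal Python sketch of that naming pattern as observed in this journal (an illustration of the observed convention, not the kubelet's actual code):

# Sketch: reproduce the pod slice names seen in the journal from QoS class + pod UID.
# This mirrors the observed pattern kubepods-<qos>-pod<uid with '-' -> '_'>.slice;
# it is an illustration only, not the kubelet's implementation.

def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("besteffort", "bfb39304-27f5-4c67-82b2-343a8f2f2c51"))
# kubepods-besteffort-podbfb39304_27f5_4c67_82b2_343a8f2f2c51.slice
print(pod_slice_name("burstable", "36de7565-c8c1-49aa-80d1-a3b015d66662"))
# kubepods-burstable-pod36de7565_c8c1_49aa_80d1_a3b015d66662.slice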
Mar 19 11:51:32.348217 kubelet[2869]: I0319 11:51:32.348019 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb39304-27f5-4c67-82b2-343a8f2f2c51-xtables-lock\") pod \"kube-proxy-w9l8v\" (UID: \"bfb39304-27f5-4c67-82b2-343a8f2f2c51\") " pod="kube-system/kube-proxy-w9l8v" Mar 19 11:51:32.348217 kubelet[2869]: I0319 11:51:32.348055 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-hostproc\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348217 kubelet[2869]: I0319 11:51:32.348071 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bfb39304-27f5-4c67-82b2-343a8f2f2c51-kube-proxy\") pod \"kube-proxy-w9l8v\" (UID: \"bfb39304-27f5-4c67-82b2-343a8f2f2c51\") " pod="kube-system/kube-proxy-w9l8v" Mar 19 11:51:32.348217 kubelet[2869]: I0319 11:51:32.348083 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-etc-cni-netd\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348217 kubelet[2869]: I0319 11:51:32.348107 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36de7565-c8c1-49aa-80d1-a3b015d66662-clustermesh-secrets\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348217 kubelet[2869]: I0319 11:51:32.348122 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb39304-27f5-4c67-82b2-343a8f2f2c51-lib-modules\") pod \"kube-proxy-w9l8v\" (UID: \"bfb39304-27f5-4c67-82b2-343a8f2f2c51\") " pod="kube-system/kube-proxy-w9l8v" Mar 19 11:51:32.348650 kubelet[2869]: I0319 11:51:32.348135 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cni-path\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348650 kubelet[2869]: I0319 11:51:32.348148 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-config-path\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348650 kubelet[2869]: I0319 11:51:32.348161 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-cgroup\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348650 kubelet[2869]: I0319 11:51:32.348256 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-host-proc-sys-net\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348650 kubelet[2869]: I0319 11:51:32.348290 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cb6p\" (UniqueName: \"kubernetes.io/projected/36de7565-c8c1-49aa-80d1-a3b015d66662-kube-api-access-5cb6p\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348650 kubelet[2869]: I0319 11:51:32.348306 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-bpf-maps\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348837 kubelet[2869]: I0319 11:51:32.348320 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps425\" (UniqueName: \"kubernetes.io/projected/bfb39304-27f5-4c67-82b2-343a8f2f2c51-kube-api-access-ps425\") pod \"kube-proxy-w9l8v\" (UID: \"bfb39304-27f5-4c67-82b2-343a8f2f2c51\") " pod="kube-system/kube-proxy-w9l8v" Mar 19 11:51:32.348837 kubelet[2869]: I0319 11:51:32.348333 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-run\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348837 kubelet[2869]: I0319 11:51:32.348344 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-lib-modules\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348837 kubelet[2869]: I0319 11:51:32.348355 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-xtables-lock\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348837 kubelet[2869]: I0319 11:51:32.348367 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-host-proc-sys-kernel\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.348837 kubelet[2869]: I0319 11:51:32.348379 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36de7565-c8c1-49aa-80d1-a3b015d66662-hubble-tls\") pod \"cilium-szzcf\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " pod="kube-system/cilium-szzcf" Mar 19 11:51:32.561618 kubelet[2869]: I0319 11:51:32.561205 2869 topology_manager.go:215] "Topology Admit Handler" podUID="4e42a6c2-0292-4a88-b689-e0644a14f755" podNamespace="kube-system" podName="cilium-operator-599987898-49kn6" Mar 19 11:51:32.568570 systemd[1]: Created slice kubepods-besteffort-pod4e42a6c2_0292_4a88_b689_e0644a14f755.slice 
- libcontainer container kubepods-besteffort-pod4e42a6c2_0292_4a88_b689_e0644a14f755.slice. Mar 19 11:51:32.581014 containerd[1569]: time="2025-03-19T11:51:32.580961720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9l8v,Uid:bfb39304-27f5-4c67-82b2-343a8f2f2c51,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:32.586690 containerd[1569]: time="2025-03-19T11:51:32.586495689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szzcf,Uid:36de7565-c8c1-49aa-80d1-a3b015d66662,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:32.594573 containerd[1569]: time="2025-03-19T11:51:32.593993035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:32.594573 containerd[1569]: time="2025-03-19T11:51:32.594033360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:32.594573 containerd[1569]: time="2025-03-19T11:51:32.594043548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:32.594573 containerd[1569]: time="2025-03-19T11:51:32.594082952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:32.605109 containerd[1569]: time="2025-03-19T11:51:32.604967699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:32.605109 containerd[1569]: time="2025-03-19T11:51:32.605045246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:32.605109 containerd[1569]: time="2025-03-19T11:51:32.605066558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:32.605205 containerd[1569]: time="2025-03-19T11:51:32.605131751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:32.606033 systemd[1]: Started cri-containerd-5a8996fb5242f3b9ef688fd1debdef3b54b87929b6aef060f2d3a139ed9e48b9.scope - libcontainer container 5a8996fb5242f3b9ef688fd1debdef3b54b87929b6aef060f2d3a139ed9e48b9. Mar 19 11:51:32.624869 systemd[1]: Started cri-containerd-d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896.scope - libcontainer container d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896. 
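containerd's own entries here keep a consistent time="..." level=... msg="..." layout, with the sandbox ID quoted inside the message (and the nested quotes escaped as \" in the journal text). A hedged sketch of pulling pod-name-to-sandbox-ID pairs out of lines shaped exactly like the ones above:

import re

# containerd journal line, abridged from the entries above; inside msg="..." the
# nested quotes around the sandbox ID appear escaped as \" in the journal text.
line = ('time="2025-03-19T11:51:32.627236927Z" level=info '
        'msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9l8v,'
        'Namespace:kube-system,Attempt:0,} returns sandbox id '
        '\\"5a8996fb5242f3b9ef688fd1debdef3b54b87929b6aef060f2d3a139ed9e48b9\\""')

pat = re.compile(r'Name:([^,]+),.*returns sandbox id \\"([0-9a-f]{64})\\"')
m = pat.search(line)
if m:
    name, sandbox = m.groups()
    print(f"{name} -> {sandbox[:12]}...")   # kube-proxy-w9l8v -> 5a8996fb5242...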
Mar 19 11:51:32.627459 containerd[1569]: time="2025-03-19T11:51:32.627236927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9l8v,Uid:bfb39304-27f5-4c67-82b2-343a8f2f2c51,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a8996fb5242f3b9ef688fd1debdef3b54b87929b6aef060f2d3a139ed9e48b9\"" Mar 19 11:51:32.642350 containerd[1569]: time="2025-03-19T11:51:32.642323533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szzcf,Uid:36de7565-c8c1-49aa-80d1-a3b015d66662,Namespace:kube-system,Attempt:0,} returns sandbox id \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\"" Mar 19 11:51:32.643377 containerd[1569]: time="2025-03-19T11:51:32.643324010Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 19 11:51:32.646126 containerd[1569]: time="2025-03-19T11:51:32.646108494Z" level=info msg="CreateContainer within sandbox \"5a8996fb5242f3b9ef688fd1debdef3b54b87929b6aef060f2d3a139ed9e48b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 11:51:32.651423 kubelet[2869]: I0319 11:51:32.651402 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchpw\" (UniqueName: \"kubernetes.io/projected/4e42a6c2-0292-4a88-b689-e0644a14f755-kube-api-access-xchpw\") pod \"cilium-operator-599987898-49kn6\" (UID: \"4e42a6c2-0292-4a88-b689-e0644a14f755\") " pod="kube-system/cilium-operator-599987898-49kn6" Mar 19 11:51:32.651523 kubelet[2869]: I0319 11:51:32.651502 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e42a6c2-0292-4a88-b689-e0644a14f755-cilium-config-path\") pod \"cilium-operator-599987898-49kn6\" (UID: \"4e42a6c2-0292-4a88-b689-e0644a14f755\") " pod="kube-system/cilium-operator-599987898-49kn6" Mar 19 11:51:32.659142 containerd[1569]: time="2025-03-19T11:51:32.659125954Z" level=info msg="CreateContainer within sandbox \"5a8996fb5242f3b9ef688fd1debdef3b54b87929b6aef060f2d3a139ed9e48b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b9e6e0beeac8486d1404b03fc096b29e9c95973bbfde20d5b507b60c4079c62c\"" Mar 19 11:51:32.659389 containerd[1569]: time="2025-03-19T11:51:32.659374747Z" level=info msg="StartContainer for \"b9e6e0beeac8486d1404b03fc096b29e9c95973bbfde20d5b507b60c4079c62c\"" Mar 19 11:51:32.676843 systemd[1]: Started cri-containerd-b9e6e0beeac8486d1404b03fc096b29e9c95973bbfde20d5b507b60c4079c62c.scope - libcontainer container b9e6e0beeac8486d1404b03fc096b29e9c95973bbfde20d5b507b60c4079c62c. Mar 19 11:51:32.703421 containerd[1569]: time="2025-03-19T11:51:32.703396462Z" level=info msg="StartContainer for \"b9e6e0beeac8486d1404b03fc096b29e9c95973bbfde20d5b507b60c4079c62c\" returns successfully" Mar 19 11:51:32.871039 containerd[1569]: time="2025-03-19T11:51:32.871016553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-49kn6,Uid:4e42a6c2-0292-4a88-b689-e0644a14f755,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:32.886227 containerd[1569]: time="2025-03-19T11:51:32.886069628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:32.886227 containerd[1569]: time="2025-03-19T11:51:32.886109213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:32.886227 containerd[1569]: time="2025-03-19T11:51:32.886117048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:32.886227 containerd[1569]: time="2025-03-19T11:51:32.886167879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:32.897852 systemd[1]: Started cri-containerd-cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea.scope - libcontainer container cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea. Mar 19 11:51:32.927448 containerd[1569]: time="2025-03-19T11:51:32.927423410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-49kn6,Uid:4e42a6c2-0292-4a88-b689-e0644a14f755,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\"" Mar 19 11:51:33.615043 kubelet[2869]: I0319 11:51:33.614850 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w9l8v" podStartSLOduration=1.6148246469999998 podStartE2EDuration="1.614824647s" podCreationTimestamp="2025-03-19 11:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:33.614325495 +0000 UTC m=+16.118831630" watchObservedRunningTime="2025-03-19 11:51:33.614824647 +0000 UTC m=+16.119330783" Mar 19 11:51:36.469382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370343030.mount: Deactivated successfully. Mar 19 11:51:41.282556 containerd[1569]: time="2025-03-19T11:51:41.282531642Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:41.283569 containerd[1569]: time="2025-03-19T11:51:41.283514734Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 19 11:51:41.284222 containerd[1569]: time="2025-03-19T11:51:41.283795777Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:41.284947 containerd[1569]: time="2025-03-19T11:51:41.284778703Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.641425088s" Mar 19 11:51:41.284947 containerd[1569]: time="2025-03-19T11:51:41.284802892Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 19 11:51:41.286907 containerd[1569]: time="2025-03-19T11:51:41.286893899Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 19 11:51:41.289874 containerd[1569]: 
time="2025-03-19T11:51:41.289776552Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 19 11:51:41.309556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32597126.mount: Deactivated successfully. Mar 19 11:51:41.323599 containerd[1569]: time="2025-03-19T11:51:41.323556146Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\"" Mar 19 11:51:41.324232 containerd[1569]: time="2025-03-19T11:51:41.323797383Z" level=info msg="StartContainer for \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\"" Mar 19 11:51:41.453867 systemd[1]: Started cri-containerd-e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a.scope - libcontainer container e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a. Mar 19 11:51:41.478863 containerd[1569]: time="2025-03-19T11:51:41.478833706Z" level=info msg="StartContainer for \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\" returns successfully" Mar 19 11:51:41.486197 systemd[1]: cri-containerd-e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a.scope: Deactivated successfully. Mar 19 11:51:42.009133 containerd[1569]: time="2025-03-19T11:51:41.999105644Z" level=info msg="shim disconnected" id=e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a namespace=k8s.io Mar 19 11:51:42.009133 containerd[1569]: time="2025-03-19T11:51:42.009131257Z" level=warning msg="cleaning up after shim disconnected" id=e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a namespace=k8s.io Mar 19 11:51:42.009257 containerd[1569]: time="2025-03-19T11:51:42.009140402Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:51:42.306994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a-rootfs.mount: Deactivated successfully. Mar 19 11:51:42.659306 containerd[1569]: time="2025-03-19T11:51:42.659215282Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 19 11:51:42.673770 containerd[1569]: time="2025-03-19T11:51:42.673691301Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\"" Mar 19 11:51:42.687024 containerd[1569]: time="2025-03-19T11:51:42.686937583Z" level=info msg="StartContainer for \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\"" Mar 19 11:51:42.711868 systemd[1]: Started cri-containerd-3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f.scope - libcontainer container 3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f. Mar 19 11:51:42.756516 containerd[1569]: time="2025-03-19T11:51:42.756488649Z" level=info msg="StartContainer for \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\" returns successfully" Mar 19 11:51:42.768948 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 19 11:51:42.769225 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:51:42.769373 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:51:42.771988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:51:42.774242 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:51:42.774672 systemd[1]: cri-containerd-3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f.scope: Deactivated successfully. Mar 19 11:51:42.796441 containerd[1569]: time="2025-03-19T11:51:42.796393565Z" level=info msg="shim disconnected" id=3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f namespace=k8s.io Mar 19 11:51:42.796441 containerd[1569]: time="2025-03-19T11:51:42.796435725Z" level=warning msg="cleaning up after shim disconnected" id=3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f namespace=k8s.io Mar 19 11:51:42.796441 containerd[1569]: time="2025-03-19T11:51:42.796441425Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:51:42.821378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:51:43.256627 containerd[1569]: time="2025-03-19T11:51:43.256193238Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:43.256922 containerd[1569]: time="2025-03-19T11:51:43.256903022Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 19 11:51:43.257311 containerd[1569]: time="2025-03-19T11:51:43.257300042Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:51:43.260559 containerd[1569]: time="2025-03-19T11:51:43.258556545Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.971195509s" Mar 19 11:51:43.260559 containerd[1569]: time="2025-03-19T11:51:43.260556795Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 19 11:51:43.263335 containerd[1569]: time="2025-03-19T11:51:43.263263734Z" level=info msg="CreateContainer within sandbox \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 19 11:51:43.307734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f-rootfs.mount: Deactivated successfully. 
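Both pulls also log the bytes read and the elapsed time: 166730503 bytes in about 8.64 s for the agent image and 18904197 bytes in about 1.97 s for the operator image. A rough back-of-the-envelope sketch of the effective transfer rate from those logged figures (compressed bytes as reported by containerd, so an estimate rather than a measured bandwidth):

# Rough transfer-rate estimate from the byte counts and durations containerd
# logged for the two pulls above (compressed bytes, so only an approximation).
pulls = {
    "cilium agent image":     (166_730_503, 8.641425088),   # bytes read, seconds
    "operator-generic image": (18_904_197,  1.971195509),
}

for name, (nbytes, secs) in pulls.items():
    mib = nbytes / (1024 * 1024)
    print(f"{name}: {mib:.1f} MiB in {secs:.2f} s (~{mib / secs:.1f} MiB/s)")
# cilium agent image: 159.0 MiB in 8.64 s (~18.4 MiB/s)
# operator-generic image: 18.0 MiB in 1.97 s (~9.1 MiB/s)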
Mar 19 11:51:43.321534 containerd[1569]: time="2025-03-19T11:51:43.321512043Z" level=info msg="CreateContainer within sandbox \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\"" Mar 19 11:51:43.321908 containerd[1569]: time="2025-03-19T11:51:43.321893527Z" level=info msg="StartContainer for \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\"" Mar 19 11:51:43.340930 systemd[1]: Started cri-containerd-e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b.scope - libcontainer container e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b. Mar 19 11:51:43.355940 containerd[1569]: time="2025-03-19T11:51:43.355915893Z" level=info msg="StartContainer for \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\" returns successfully" Mar 19 11:51:43.665766 containerd[1569]: time="2025-03-19T11:51:43.664796520Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 19 11:51:43.680907 containerd[1569]: time="2025-03-19T11:51:43.680834897Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\"" Mar 19 11:51:43.682114 containerd[1569]: time="2025-03-19T11:51:43.682096944Z" level=info msg="StartContainer for \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\"" Mar 19 11:51:43.719902 systemd[1]: Started cri-containerd-8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce.scope - libcontainer container 8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce. Mar 19 11:51:43.744396 kubelet[2869]: I0319 11:51:43.740161 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-49kn6" podStartSLOduration=1.406769479 podStartE2EDuration="11.740148855s" podCreationTimestamp="2025-03-19 11:51:32 +0000 UTC" firstStartedPulling="2025-03-19 11:51:32.928066465 +0000 UTC m=+15.432572591" lastFinishedPulling="2025-03-19 11:51:43.26144584 +0000 UTC m=+25.765951967" observedRunningTime="2025-03-19 11:51:43.684987848 +0000 UTC m=+26.189493983" watchObservedRunningTime="2025-03-19 11:51:43.740148855 +0000 UTC m=+26.244654991" Mar 19 11:51:43.769040 containerd[1569]: time="2025-03-19T11:51:43.769009212Z" level=info msg="StartContainer for \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\" returns successfully" Mar 19 11:51:43.779966 systemd[1]: cri-containerd-8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce.scope: Deactivated successfully. Mar 19 11:51:43.780145 systemd[1]: cri-containerd-8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce.scope: Consumed 12ms CPU time, 3.1M memory peak, 1.2M read from disk. 
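The mount-bpf-fs init container above runs and exits almost immediately. In the upstream Cilium DaemonSet that step normally ensures a BPF filesystem is mounted at /sys/fs/bpf; that mountpoint is an assumption about the chart defaults, not something stated in this journal. A small Linux-only sketch that checks /proc/mounts for such a mount:

# Check whether a bpf filesystem is mounted (assumed default mountpoint /sys/fs/bpf).
# Linux-only; /proc/mounts lists "<source> <mountpoint> <fstype> ..." per line.
def bpffs_mounted(mountpoint: str = "/sys/fs/bpf") -> bool:
    with open("/proc/mounts") as f:
        for entry in f:
            fields = entry.split()
            if len(fields) >= 3 and fields[1] == mountpoint and fields[2] == "bpf":
                return True
    return False

if __name__ == "__main__":
    print("bpffs mounted:", bpffs_mounted())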
Mar 19 11:51:44.066812 containerd[1569]: time="2025-03-19T11:51:44.065136349Z" level=info msg="shim disconnected" id=8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce namespace=k8s.io Mar 19 11:51:44.066812 containerd[1569]: time="2025-03-19T11:51:44.065171916Z" level=warning msg="cleaning up after shim disconnected" id=8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce namespace=k8s.io Mar 19 11:51:44.066812 containerd[1569]: time="2025-03-19T11:51:44.065177413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:51:44.307683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce-rootfs.mount: Deactivated successfully. Mar 19 11:51:44.668225 containerd[1569]: time="2025-03-19T11:51:44.668112965Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 19 11:51:44.685108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220170038.mount: Deactivated successfully. Mar 19 11:51:44.688596 containerd[1569]: time="2025-03-19T11:51:44.688572856Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\"" Mar 19 11:51:44.688967 containerd[1569]: time="2025-03-19T11:51:44.688949751Z" level=info msg="StartContainer for \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\"" Mar 19 11:51:44.709965 systemd[1]: Started cri-containerd-52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32.scope - libcontainer container 52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32. Mar 19 11:51:44.724053 containerd[1569]: time="2025-03-19T11:51:44.724030036Z" level=info msg="StartContainer for \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\" returns successfully" Mar 19 11:51:44.725449 systemd[1]: cri-containerd-52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32.scope: Deactivated successfully. Mar 19 11:51:44.738154 containerd[1569]: time="2025-03-19T11:51:44.737819090Z" level=info msg="shim disconnected" id=52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32 namespace=k8s.io Mar 19 11:51:44.738154 containerd[1569]: time="2025-03-19T11:51:44.737939344Z" level=warning msg="cleaning up after shim disconnected" id=52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32 namespace=k8s.io Mar 19 11:51:44.738154 containerd[1569]: time="2025-03-19T11:51:44.737946148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:51:45.308151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32-rootfs.mount: Deactivated successfully. Mar 19 11:51:45.671779 containerd[1569]: time="2025-03-19T11:51:45.671439314Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 19 11:51:45.683164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242246439.mount: Deactivated successfully. 
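Taken together, the CreateContainer entries show the cilium-szzcf pod's containers being created strictly in sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally cilium-agent. The first four exit after doing their setup work (their shims disconnect in the log); treating them as init containers is an inference from that behaviour. A tiny sketch restating the observed order:

# Container creation order observed in this journal for pod cilium-szzcf.
# The first four exit after their setup step (their shims disconnect above);
# only cilium-agent keeps running.
CILIUM_CONTAINER_ORDER = [
    ("mount-cgroup",            "setup, exits"),
    ("apply-sysctl-overwrites", "setup, exits"),
    ("mount-bpf-fs",            "setup, exits"),
    ("clean-cilium-state",      "setup, exits"),
    ("cilium-agent",            "long-running"),
]

for i, (name, kind) in enumerate(CILIUM_CONTAINER_ORDER, start=1):
    print(f"{i}. {name} ({kind})")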
Mar 19 11:51:45.684396 containerd[1569]: time="2025-03-19T11:51:45.683968312Z" level=info msg="CreateContainer within sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\"" Mar 19 11:51:45.685688 containerd[1569]: time="2025-03-19T11:51:45.685674132Z" level=info msg="StartContainer for \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\"" Mar 19 11:51:45.705971 systemd[1]: Started cri-containerd-348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8.scope - libcontainer container 348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8. Mar 19 11:51:45.723568 containerd[1569]: time="2025-03-19T11:51:45.723548796Z" level=info msg="StartContainer for \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\" returns successfully" Mar 19 11:51:45.942450 kubelet[2869]: I0319 11:51:45.941804 2869 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 19 11:51:45.970017 kubelet[2869]: I0319 11:51:45.967461 2869 topology_manager.go:215] "Topology Admit Handler" podUID="5479c27b-9f4d-44eb-a5ba-e1d240db0b85" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7hltb" Mar 19 11:51:45.970017 kubelet[2869]: I0319 11:51:45.967571 2869 topology_manager.go:215] "Topology Admit Handler" podUID="e27ec7c2-ff4f-4300-898e-289d03fc96c2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bzs62" Mar 19 11:51:45.997996 systemd[1]: Created slice kubepods-burstable-pode27ec7c2_ff4f_4300_898e_289d03fc96c2.slice - libcontainer container kubepods-burstable-pode27ec7c2_ff4f_4300_898e_289d03fc96c2.slice. Mar 19 11:51:46.011727 systemd[1]: Created slice kubepods-burstable-pod5479c27b_9f4d_44eb_a5ba_e1d240db0b85.slice - libcontainer container kubepods-burstable-pod5479c27b_9f4d_44eb_a5ba_e1d240db0b85.slice. 
Mar 19 11:51:46.036605 kubelet[2869]: I0319 11:51:46.036581 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmzs\" (UniqueName: \"kubernetes.io/projected/5479c27b-9f4d-44eb-a5ba-e1d240db0b85-kube-api-access-llmzs\") pod \"coredns-7db6d8ff4d-7hltb\" (UID: \"5479c27b-9f4d-44eb-a5ba-e1d240db0b85\") " pod="kube-system/coredns-7db6d8ff4d-7hltb" Mar 19 11:51:46.036605 kubelet[2869]: I0319 11:51:46.036603 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e27ec7c2-ff4f-4300-898e-289d03fc96c2-config-volume\") pod \"coredns-7db6d8ff4d-bzs62\" (UID: \"e27ec7c2-ff4f-4300-898e-289d03fc96c2\") " pod="kube-system/coredns-7db6d8ff4d-bzs62" Mar 19 11:51:46.036708 kubelet[2869]: I0319 11:51:46.036616 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5479c27b-9f4d-44eb-a5ba-e1d240db0b85-config-volume\") pod \"coredns-7db6d8ff4d-7hltb\" (UID: \"5479c27b-9f4d-44eb-a5ba-e1d240db0b85\") " pod="kube-system/coredns-7db6d8ff4d-7hltb" Mar 19 11:51:46.036708 kubelet[2869]: I0319 11:51:46.036626 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhv59\" (UniqueName: \"kubernetes.io/projected/e27ec7c2-ff4f-4300-898e-289d03fc96c2-kube-api-access-jhv59\") pod \"coredns-7db6d8ff4d-bzs62\" (UID: \"e27ec7c2-ff4f-4300-898e-289d03fc96c2\") " pod="kube-system/coredns-7db6d8ff4d-bzs62" Mar 19 11:51:46.311061 containerd[1569]: time="2025-03-19T11:51:46.310605162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bzs62,Uid:e27ec7c2-ff4f-4300-898e-289d03fc96c2,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:46.315116 containerd[1569]: time="2025-03-19T11:51:46.314965691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7hltb,Uid:5479c27b-9f4d-44eb-a5ba-e1d240db0b85,Namespace:kube-system,Attempt:0,}" Mar 19 11:51:46.679226 kubelet[2869]: I0319 11:51:46.679126 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-szzcf" podStartSLOduration=6.035429499 podStartE2EDuration="14.679106021s" podCreationTimestamp="2025-03-19 11:51:32 +0000 UTC" firstStartedPulling="2025-03-19 11:51:32.642902575 +0000 UTC m=+15.147408701" lastFinishedPulling="2025-03-19 11:51:41.286579096 +0000 UTC m=+23.791085223" observedRunningTime="2025-03-19 11:51:46.678884251 +0000 UTC m=+29.183390389" watchObservedRunningTime="2025-03-19 11:51:46.679106021 +0000 UTC m=+29.183612157" Mar 19 11:51:47.997576 systemd-networkd[1453]: cilium_host: Link UP Mar 19 11:51:47.997657 systemd-networkd[1453]: cilium_net: Link UP Mar 19 11:51:47.998724 systemd-networkd[1453]: cilium_net: Gained carrier Mar 19 11:51:47.998846 systemd-networkd[1453]: cilium_host: Gained carrier Mar 19 11:51:47.998912 systemd-networkd[1453]: cilium_net: Gained IPv6LL Mar 19 11:51:47.998990 systemd-networkd[1453]: cilium_host: Gained IPv6LL Mar 19 11:51:48.098049 systemd-networkd[1453]: cilium_vxlan: Link UP Mar 19 11:51:48.098054 systemd-networkd[1453]: cilium_vxlan: Gained carrier Mar 19 11:51:48.414887 kernel: NET: Registered PF_ALG protocol family Mar 19 11:51:48.785997 systemd-networkd[1453]: lxc_health: Link UP Mar 19 11:51:48.798914 systemd-networkd[1453]: lxc_health: Gained carrier Mar 19 11:51:49.363577 systemd-networkd[1453]: lxc9dffceb58cfc: 
Link UP Mar 19 11:51:49.366612 kernel: eth0: renamed from tmp7cd52 Mar 19 11:51:49.369903 systemd-networkd[1453]: cilium_vxlan: Gained IPv6LL Mar 19 11:51:49.383406 systemd-networkd[1453]: lxc9dffceb58cfc: Gained carrier Mar 19 11:51:49.384931 systemd-networkd[1453]: lxcc98ce7242ee4: Link UP Mar 19 11:51:49.387808 kernel: eth0: renamed from tmpa8907 Mar 19 11:51:49.392787 systemd-networkd[1453]: lxcc98ce7242ee4: Gained carrier Mar 19 11:51:50.649884 systemd-networkd[1453]: lxc_health: Gained IPv6LL Mar 19 11:51:50.905904 systemd-networkd[1453]: lxcc98ce7242ee4: Gained IPv6LL Mar 19 11:51:51.353871 systemd-networkd[1453]: lxc9dffceb58cfc: Gained IPv6LL Mar 19 11:51:51.747991 containerd[1569]: time="2025-03-19T11:51:51.747516499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:51.747991 containerd[1569]: time="2025-03-19T11:51:51.747553506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:51.747991 containerd[1569]: time="2025-03-19T11:51:51.747562800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:51.747991 containerd[1569]: time="2025-03-19T11:51:51.747609129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:51.770975 containerd[1569]: time="2025-03-19T11:51:51.770521691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:51:51.770975 containerd[1569]: time="2025-03-19T11:51:51.770558290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:51:51.770975 containerd[1569]: time="2025-03-19T11:51:51.770567495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:51.770975 containerd[1569]: time="2025-03-19T11:51:51.770620226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:51:51.775218 systemd[1]: Started cri-containerd-a8907506954a2c26664bad81c5045665f2d38396f37ab4fbfcc6f28560c24090.scope - libcontainer container a8907506954a2c26664bad81c5045665f2d38396f37ab4fbfcc6f28560c24090. Mar 19 11:51:51.793842 systemd[1]: Started cri-containerd-7cd526958b69b53fe84ec449047620acb51566a235c363c71b028d6fb4772977.scope - libcontainer container 7cd526958b69b53fe84ec449047620acb51566a235c363c71b028d6fb4772977. 
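The systemd-networkd burst above records each Cilium-created interface (cilium_host, cilium_net, cilium_vxlan, lxc_health and the per-pod lxc devices) passing through "Link UP", "Gained carrier" and "Gained IPv6LL". A hedged sketch that folds lines of that shape into a per-interface event table; the sample lines are abridged copies from this journal:

import re
from collections import defaultdict

# Abridged systemd-networkd lines copied from the journal above.
lines = [
    "systemd-networkd[1453]: cilium_vxlan: Link UP",
    "systemd-networkd[1453]: cilium_vxlan: Gained carrier",
    "systemd-networkd[1453]: lxc_health: Link UP",
    "systemd-networkd[1453]: lxc_health: Gained carrier",
    "systemd-networkd[1453]: lxc_health: Gained IPv6LL",
]

pat = re.compile(r"systemd-networkd\[\d+\]: (\S+): (Link UP|Gained carrier|Gained IPv6LL)")
events = defaultdict(list)
for line in lines:
    m = pat.search(line)
    if m:
        events[m.group(1)].append(m.group(2))

for iface, seen in events.items():
    print(f"{iface}: {', '.join(seen)}")
# cilium_vxlan: Link UP, Gained carrier
# lxc_health: Link UP, Gained carrier, Gained IPv6LL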
Mar 19 11:51:51.796392 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:51:51.809458 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:51:51.820443 containerd[1569]: time="2025-03-19T11:51:51.820397957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bzs62,Uid:e27ec7c2-ff4f-4300-898e-289d03fc96c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8907506954a2c26664bad81c5045665f2d38396f37ab4fbfcc6f28560c24090\"" Mar 19 11:51:51.822851 containerd[1569]: time="2025-03-19T11:51:51.822835592Z" level=info msg="CreateContainer within sandbox \"a8907506954a2c26664bad81c5045665f2d38396f37ab4fbfcc6f28560c24090\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:51:51.845972 containerd[1569]: time="2025-03-19T11:51:51.845946511Z" level=info msg="CreateContainer within sandbox \"a8907506954a2c26664bad81c5045665f2d38396f37ab4fbfcc6f28560c24090\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d3c7ceccb4e70d898e73d967eeeb1559824490283a5bfec1875d08a6a478407\"" Mar 19 11:51:51.847499 containerd[1569]: time="2025-03-19T11:51:51.846987923Z" level=info msg="StartContainer for \"7d3c7ceccb4e70d898e73d967eeeb1559824490283a5bfec1875d08a6a478407\"" Mar 19 11:51:51.851705 containerd[1569]: time="2025-03-19T11:51:51.851688183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7hltb,Uid:5479c27b-9f4d-44eb-a5ba-e1d240db0b85,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cd526958b69b53fe84ec449047620acb51566a235c363c71b028d6fb4772977\"" Mar 19 11:51:51.855625 containerd[1569]: time="2025-03-19T11:51:51.855602069Z" level=info msg="CreateContainer within sandbox \"7cd526958b69b53fe84ec449047620acb51566a235c363c71b028d6fb4772977\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:51:51.864121 containerd[1569]: time="2025-03-19T11:51:51.864099603Z" level=info msg="CreateContainer within sandbox \"7cd526958b69b53fe84ec449047620acb51566a235c363c71b028d6fb4772977\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d519bbfe6bdce9f5f01c1c8fc9e4de627dcaf545e8d6aa52d568db7e0c2f48f5\"" Mar 19 11:51:51.864836 containerd[1569]: time="2025-03-19T11:51:51.864814553Z" level=info msg="StartContainer for \"d519bbfe6bdce9f5f01c1c8fc9e4de627dcaf545e8d6aa52d568db7e0c2f48f5\"" Mar 19 11:51:51.874853 systemd[1]: Started cri-containerd-7d3c7ceccb4e70d898e73d967eeeb1559824490283a5bfec1875d08a6a478407.scope - libcontainer container 7d3c7ceccb4e70d898e73d967eeeb1559824490283a5bfec1875d08a6a478407. Mar 19 11:51:51.884838 systemd[1]: Started cri-containerd-d519bbfe6bdce9f5f01c1c8fc9e4de627dcaf545e8d6aa52d568db7e0c2f48f5.scope - libcontainer container d519bbfe6bdce9f5f01c1c8fc9e4de627dcaf545e8d6aa52d568db7e0c2f48f5. 
Mar 19 11:51:51.901911 containerd[1569]: time="2025-03-19T11:51:51.901851019Z" level=info msg="StartContainer for \"7d3c7ceccb4e70d898e73d967eeeb1559824490283a5bfec1875d08a6a478407\" returns successfully" Mar 19 11:51:51.905144 containerd[1569]: time="2025-03-19T11:51:51.905124399Z" level=info msg="StartContainer for \"d519bbfe6bdce9f5f01c1c8fc9e4de627dcaf545e8d6aa52d568db7e0c2f48f5\" returns successfully" Mar 19 11:51:52.689968 kubelet[2869]: I0319 11:51:52.689904 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7hltb" podStartSLOduration=20.689889909 podStartE2EDuration="20.689889909s" podCreationTimestamp="2025-03-19 11:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:52.68849209 +0000 UTC m=+35.192998233" watchObservedRunningTime="2025-03-19 11:51:52.689889909 +0000 UTC m=+35.194396046" Mar 19 11:51:52.697810 kubelet[2869]: I0319 11:51:52.697371 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bzs62" podStartSLOduration=20.697362145 podStartE2EDuration="20.697362145s" podCreationTimestamp="2025-03-19 11:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:51:52.697107345 +0000 UTC m=+35.201613490" watchObservedRunningTime="2025-03-19 11:51:52.697362145 +0000 UTC m=+35.201868282" Mar 19 11:51:52.755118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3683361517.mount: Deactivated successfully. Mar 19 11:51:55.102430 systemd[1]: Started sshd@7-139.178.70.109:22-8.146.204.142:35666.service - OpenSSH per-connection server daemon (8.146.204.142:35666). Mar 19 11:51:58.002391 kubelet[2869]: I0319 11:51:58.002275 2869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 11:52:16.624583 systemd[1]: Started sshd@8-139.178.70.109:22-147.75.109.163:40796.service - OpenSSH per-connection server daemon (147.75.109.163:40796). Mar 19 11:52:16.660313 sshd[4249]: Accepted publickey for core from 147.75.109.163 port 40796 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:16.661556 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:16.665580 systemd-logind[1539]: New session 10 of user core. Mar 19 11:52:16.672912 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 19 11:52:17.074500 sshd[4251]: Connection closed by 147.75.109.163 port 40796 Mar 19 11:52:17.075111 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:17.077074 systemd[1]: sshd@8-139.178.70.109:22-147.75.109.163:40796.service: Deactivated successfully. Mar 19 11:52:17.078231 systemd[1]: session-10.scope: Deactivated successfully. Mar 19 11:52:17.078674 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Mar 19 11:52:17.079232 systemd-logind[1539]: Removed session 10. Mar 19 11:52:22.011684 sshd[4243]: banner exchange: Connection from 8.146.204.142 port 35666: invalid format Mar 19 11:52:22.012199 systemd[1]: sshd@7-139.178.70.109:22-8.146.204.142:35666.service: Deactivated successfully. Mar 19 11:52:22.085655 systemd[1]: Started sshd@9-139.178.70.109:22-147.75.109.163:40798.service - OpenSSH per-connection server daemon (147.75.109.163:40798). 
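The pod_startup_latency_tracker entries report podStartE2EDuration as the gap between podCreationTimestamp and observedRunningTime, with the image-pull window left at the zero value when nothing had to be pulled. A short sketch reproducing the coredns-7db6d8ff4d-7hltb figure from the two timestamps logged above; the formula is inferred from the logged fields rather than taken from kubelet source:

from datetime import datetime, timezone

# Timestamps copied from the coredns-7db6d8ff4d-7hltb entry above, with the
# fractional seconds truncated to microseconds so datetime() accepts them.
created = datetime(2025, 3, 19, 11, 51, 32, tzinfo=timezone.utc)          # podCreationTimestamp
running = datetime(2025, 3, 19, 11, 51, 52, 688492, tzinfo=timezone.utc)  # observedRunningTime

print(f"startup took {(running - created).total_seconds():.3f}s")
# startup took 20.688s -- within a couple of milliseconds of the logged
# podStartE2EDuration=20.689889909s (the tracker samples its own clock).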
Mar 19 11:52:22.127150 sshd[4269]: Accepted publickey for core from 147.75.109.163 port 40798 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:22.128429 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:22.132303 systemd-logind[1539]: New session 11 of user core. Mar 19 11:52:22.136874 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 19 11:52:22.232017 sshd[4271]: Connection closed by 147.75.109.163 port 40798 Mar 19 11:52:22.232364 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:22.234380 systemd[1]: sshd@9-139.178.70.109:22-147.75.109.163:40798.service: Deactivated successfully. Mar 19 11:52:22.235436 systemd[1]: session-11.scope: Deactivated successfully. Mar 19 11:52:22.235874 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Mar 19 11:52:22.236526 systemd-logind[1539]: Removed session 11. Mar 19 11:52:27.247956 systemd[1]: Started sshd@10-139.178.70.109:22-147.75.109.163:57152.service - OpenSSH per-connection server daemon (147.75.109.163:57152). Mar 19 11:52:27.282178 sshd[4284]: Accepted publickey for core from 147.75.109.163 port 57152 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:27.283337 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:27.286888 systemd-logind[1539]: New session 12 of user core. Mar 19 11:52:27.291861 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 19 11:52:27.383412 sshd[4286]: Connection closed by 147.75.109.163 port 57152 Mar 19 11:52:27.383727 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:27.385663 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Mar 19 11:52:27.386107 systemd[1]: sshd@10-139.178.70.109:22-147.75.109.163:57152.service: Deactivated successfully. Mar 19 11:52:27.387194 systemd[1]: session-12.scope: Deactivated successfully. Mar 19 11:52:27.387959 systemd-logind[1539]: Removed session 12. Mar 19 11:52:32.393918 systemd[1]: Started sshd@11-139.178.70.109:22-147.75.109.163:57166.service - OpenSSH per-connection server daemon (147.75.109.163:57166). Mar 19 11:52:32.428690 sshd[4299]: Accepted publickey for core from 147.75.109.163 port 57166 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:32.429443 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:32.433361 systemd-logind[1539]: New session 13 of user core. Mar 19 11:52:32.439859 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 19 11:52:32.535161 sshd[4301]: Connection closed by 147.75.109.163 port 57166 Mar 19 11:52:32.535578 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:32.542778 systemd[1]: sshd@11-139.178.70.109:22-147.75.109.163:57166.service: Deactivated successfully. Mar 19 11:52:32.544019 systemd[1]: session-13.scope: Deactivated successfully. Mar 19 11:52:32.544566 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Mar 19 11:52:32.552932 systemd[1]: Started sshd@12-139.178.70.109:22-147.75.109.163:57178.service - OpenSSH per-connection server daemon (147.75.109.163:57178). Mar 19 11:52:32.554031 systemd-logind[1539]: Removed session 13. 
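The rest of the journal is routine SSH session churn: each connection from 147.75.109.163 is accepted, wrapped in a session-N.scope, and closed a few seconds later. A hedged sketch that pairs "Accepted publickey ... port P" with "Connection closed ... port P" lines using the timestamp shape this journal uses (the syslog-style timestamp carries no year, so 2025 is supplied by hand):

import re
from datetime import datetime

# Two abridged sshd lines copied from the journal above.
lines = [
    "Mar 19 11:52:22.127150 sshd[4269]: Accepted publickey for core from 147.75.109.163 port 40798 ssh2",
    "Mar 19 11:52:22.232017 sshd[4271]: Connection closed by 147.75.109.163 port 40798",
]

pat = re.compile(r"^(\w+ \d+ [\d:.]+) sshd\[\d+\]: (Accepted publickey|Connection closed).* port (\d+)")
opened = {}
for line in lines:
    m = pat.match(line)
    if not m:
        continue
    ts = datetime.strptime("2025 " + m.group(1), "%Y %b %d %H:%M:%S.%f")
    if m.group(2) == "Accepted publickey":
        opened[m.group(3)] = ts
    elif m.group(3) in opened:
        started = opened.pop(m.group(3))
        print(f"port {m.group(3)}: session lasted {(ts - started).total_seconds():.3f}s")
# port 40798: session lasted 0.105s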
Mar 19 11:52:32.589614 sshd[4313]: Accepted publickey for core from 147.75.109.163 port 57178 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:32.590380 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:32.593771 systemd-logind[1539]: New session 14 of user core. Mar 19 11:52:32.599846 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 19 11:52:32.730889 sshd[4316]: Connection closed by 147.75.109.163 port 57178 Mar 19 11:52:32.730802 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:32.743487 systemd[1]: sshd@12-139.178.70.109:22-147.75.109.163:57178.service: Deactivated successfully. Mar 19 11:52:32.746038 systemd[1]: session-14.scope: Deactivated successfully. Mar 19 11:52:32.746998 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Mar 19 11:52:32.754099 systemd[1]: Started sshd@13-139.178.70.109:22-147.75.109.163:57188.service - OpenSSH per-connection server daemon (147.75.109.163:57188). Mar 19 11:52:32.755944 systemd-logind[1539]: Removed session 14. Mar 19 11:52:32.789472 sshd[4325]: Accepted publickey for core from 147.75.109.163 port 57188 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:32.790054 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:32.792807 systemd-logind[1539]: New session 15 of user core. Mar 19 11:52:32.799968 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 19 11:52:32.906510 sshd[4328]: Connection closed by 147.75.109.163 port 57188 Mar 19 11:52:32.906315 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:32.913869 systemd[1]: sshd@13-139.178.70.109:22-147.75.109.163:57188.service: Deactivated successfully. Mar 19 11:52:32.915008 systemd[1]: session-15.scope: Deactivated successfully. Mar 19 11:52:32.915513 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. Mar 19 11:52:32.916041 systemd-logind[1539]: Removed session 15. Mar 19 11:52:37.917170 systemd[1]: Started sshd@14-139.178.70.109:22-147.75.109.163:46854.service - OpenSSH per-connection server daemon (147.75.109.163:46854). Mar 19 11:52:37.954627 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 46854 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:37.955681 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:37.958530 systemd-logind[1539]: New session 16 of user core. Mar 19 11:52:37.968025 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 19 11:52:38.053399 sshd[4346]: Connection closed by 147.75.109.163 port 46854 Mar 19 11:52:38.053741 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:38.055595 systemd[1]: sshd@14-139.178.70.109:22-147.75.109.163:46854.service: Deactivated successfully. Mar 19 11:52:38.056745 systemd[1]: session-16.scope: Deactivated successfully. Mar 19 11:52:38.057551 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Mar 19 11:52:38.058194 systemd-logind[1539]: Removed session 16. Mar 19 11:52:38.261783 systemd[1]: Started sshd@15-139.178.70.109:22-8.146.204.142:56850.service - OpenSSH per-connection server daemon (8.146.204.142:56850). 
Mar 19 11:52:43.065099 systemd[1]: Started sshd@16-139.178.70.109:22-147.75.109.163:46864.service - OpenSSH per-connection server daemon (147.75.109.163:46864). Mar 19 11:52:43.097628 sshd[4360]: Accepted publickey for core from 147.75.109.163 port 46864 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:43.098520 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:43.101890 systemd-logind[1539]: New session 17 of user core. Mar 19 11:52:43.106944 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 19 11:52:43.194558 sshd[4362]: Connection closed by 147.75.109.163 port 46864 Mar 19 11:52:43.194988 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:43.204021 systemd[1]: sshd@16-139.178.70.109:22-147.75.109.163:46864.service: Deactivated successfully. Mar 19 11:52:43.205103 systemd[1]: session-17.scope: Deactivated successfully. Mar 19 11:52:43.205553 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Mar 19 11:52:43.209985 systemd[1]: Started sshd@17-139.178.70.109:22-147.75.109.163:46866.service - OpenSSH per-connection server daemon (147.75.109.163:46866). Mar 19 11:52:43.211190 systemd-logind[1539]: Removed session 17. Mar 19 11:52:43.239906 sshd[4373]: Accepted publickey for core from 147.75.109.163 port 46866 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:43.240946 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:43.245053 systemd-logind[1539]: New session 18 of user core. Mar 19 11:52:43.255851 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 19 11:52:43.593661 sshd[4376]: Connection closed by 147.75.109.163 port 46866 Mar 19 11:52:43.594616 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:43.599975 systemd[1]: sshd@17-139.178.70.109:22-147.75.109.163:46866.service: Deactivated successfully. Mar 19 11:52:43.600878 systemd[1]: session-18.scope: Deactivated successfully. Mar 19 11:52:43.601298 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Mar 19 11:52:43.602307 systemd[1]: Started sshd@18-139.178.70.109:22-147.75.109.163:46882.service - OpenSSH per-connection server daemon (147.75.109.163:46882). Mar 19 11:52:43.603225 systemd-logind[1539]: Removed session 18. Mar 19 11:52:43.652595 sshd[4385]: Accepted publickey for core from 147.75.109.163 port 46882 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:43.653553 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:43.656716 systemd-logind[1539]: New session 19 of user core. Mar 19 11:52:43.665835 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 19 11:52:44.916579 sshd[4388]: Connection closed by 147.75.109.163 port 46882 Mar 19 11:52:44.918173 sshd-session[4385]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:44.928628 systemd[1]: Started sshd@19-139.178.70.109:22-147.75.109.163:40326.service - OpenSSH per-connection server daemon (147.75.109.163:40326). Mar 19 11:52:44.934764 systemd[1]: sshd@18-139.178.70.109:22-147.75.109.163:46882.service: Deactivated successfully. Mar 19 11:52:44.936344 systemd[1]: session-19.scope: Deactivated successfully. Mar 19 11:52:44.939751 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. 
Mar 19 11:52:44.940679 systemd-logind[1539]: Removed session 19. Mar 19 11:52:44.978052 sshd[4403]: Accepted publickey for core from 147.75.109.163 port 40326 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:44.978880 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:44.982178 systemd-logind[1539]: New session 20 of user core. Mar 19 11:52:44.992968 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 19 11:52:45.237094 sshd[4408]: Connection closed by 147.75.109.163 port 40326 Mar 19 11:52:45.236466 sshd-session[4403]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:45.244458 systemd[1]: sshd@19-139.178.70.109:22-147.75.109.163:40326.service: Deactivated successfully. Mar 19 11:52:45.245970 systemd[1]: session-20.scope: Deactivated successfully. Mar 19 11:52:45.247090 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. Mar 19 11:52:45.252082 systemd[1]: Started sshd@20-139.178.70.109:22-147.75.109.163:40342.service - OpenSSH per-connection server daemon (147.75.109.163:40342). Mar 19 11:52:45.253935 systemd-logind[1539]: Removed session 20. Mar 19 11:52:45.283272 sshd[4417]: Accepted publickey for core from 147.75.109.163 port 40342 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:45.284112 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:45.288944 systemd-logind[1539]: New session 21 of user core. Mar 19 11:52:45.291878 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 19 11:52:45.383116 sshd[4420]: Connection closed by 147.75.109.163 port 40342 Mar 19 11:52:45.383539 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:45.385456 systemd[1]: sshd@20-139.178.70.109:22-147.75.109.163:40342.service: Deactivated successfully. Mar 19 11:52:45.386542 systemd[1]: session-21.scope: Deactivated successfully. Mar 19 11:52:45.387034 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. Mar 19 11:52:45.387563 systemd-logind[1539]: Removed session 21. Mar 19 11:52:50.394512 systemd[1]: Started sshd@21-139.178.70.109:22-147.75.109.163:40344.service - OpenSSH per-connection server daemon (147.75.109.163:40344). Mar 19 11:52:50.428600 sshd[4436]: Accepted publickey for core from 147.75.109.163 port 40344 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:50.429568 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:50.432808 systemd-logind[1539]: New session 22 of user core. Mar 19 11:52:50.439916 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 19 11:52:50.530088 sshd[4438]: Connection closed by 147.75.109.163 port 40344 Mar 19 11:52:50.529814 sshd-session[4436]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:50.531365 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. Mar 19 11:52:50.531520 systemd[1]: sshd@21-139.178.70.109:22-147.75.109.163:40344.service: Deactivated successfully. Mar 19 11:52:50.532841 systemd[1]: session-22.scope: Deactivated successfully. Mar 19 11:52:50.533697 systemd-logind[1539]: Removed session 22. Mar 19 11:52:55.539405 systemd[1]: Started sshd@22-139.178.70.109:22-147.75.109.163:54908.service - OpenSSH per-connection server daemon (147.75.109.163:54908). 
Mar 19 11:52:55.573316 sshd[4449]: Accepted publickey for core from 147.75.109.163 port 54908 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:52:55.575656 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:52:55.579468 systemd-logind[1539]: New session 23 of user core. Mar 19 11:52:55.586942 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 19 11:52:55.678723 sshd[4451]: Connection closed by 147.75.109.163 port 54908 Mar 19 11:52:55.679875 sshd-session[4449]: pam_unix(sshd:session): session closed for user core Mar 19 11:52:55.681805 systemd[1]: sshd@22-139.178.70.109:22-147.75.109.163:54908.service: Deactivated successfully. Mar 19 11:52:55.682860 systemd[1]: session-23.scope: Deactivated successfully. Mar 19 11:52:55.683303 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit. Mar 19 11:52:55.683965 systemd-logind[1539]: Removed session 23. Mar 19 11:53:00.687652 systemd[1]: Started sshd@23-139.178.70.109:22-147.75.109.163:54924.service - OpenSSH per-connection server daemon (147.75.109.163:54924). Mar 19 11:53:00.729711 sshd[4462]: Accepted publickey for core from 147.75.109.163 port 54924 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:53:00.730624 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:53:00.734075 systemd-logind[1539]: New session 24 of user core. Mar 19 11:53:00.745865 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 19 11:53:00.833501 sshd[4464]: Connection closed by 147.75.109.163 port 54924 Mar 19 11:53:00.833834 sshd-session[4462]: pam_unix(sshd:session): session closed for user core Mar 19 11:53:00.840872 systemd[1]: sshd@23-139.178.70.109:22-147.75.109.163:54924.service: Deactivated successfully. Mar 19 11:53:00.841780 systemd[1]: session-24.scope: Deactivated successfully. Mar 19 11:53:00.842560 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit. Mar 19 11:53:00.843325 systemd[1]: Started sshd@24-139.178.70.109:22-147.75.109.163:54940.service - OpenSSH per-connection server daemon (147.75.109.163:54940). Mar 19 11:53:00.844236 systemd-logind[1539]: Removed session 24. Mar 19 11:53:00.885390 sshd[4475]: Accepted publickey for core from 147.75.109.163 port 54940 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:53:00.886266 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:53:00.889980 systemd-logind[1539]: New session 25 of user core. Mar 19 11:53:00.896924 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 19 11:53:02.222511 containerd[1569]: time="2025-03-19T11:53:02.222460439Z" level=info msg="StopContainer for \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\" with timeout 30 (s)" Mar 19 11:53:02.227010 containerd[1569]: time="2025-03-19T11:53:02.226938054Z" level=info msg="Stop container \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\" with signal terminated" Mar 19 11:53:02.246914 systemd[1]: cri-containerd-e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b.scope: Deactivated successfully. Mar 19 11:53:02.260563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b-rootfs.mount: Deactivated successfully. 
Mar 19 11:53:02.261418 containerd[1569]: time="2025-03-19T11:53:02.261381520Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:53:02.267486 containerd[1569]: time="2025-03-19T11:53:02.267449713Z" level=info msg="shim disconnected" id=e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b namespace=k8s.io Mar 19 11:53:02.267486 containerd[1569]: time="2025-03-19T11:53:02.267483338Z" level=warning msg="cleaning up after shim disconnected" id=e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b namespace=k8s.io Mar 19 11:53:02.267486 containerd[1569]: time="2025-03-19T11:53:02.267488760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:53:02.269179 containerd[1569]: time="2025-03-19T11:53:02.269162588Z" level=info msg="StopContainer for \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\" with timeout 2 (s)" Mar 19 11:53:02.269428 containerd[1569]: time="2025-03-19T11:53:02.269414062Z" level=info msg="Stop container \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\" with signal terminated" Mar 19 11:53:02.275872 systemd-networkd[1453]: lxc_health: Link DOWN Mar 19 11:53:02.275877 systemd-networkd[1453]: lxc_health: Lost carrier Mar 19 11:53:02.280499 containerd[1569]: time="2025-03-19T11:53:02.280469632Z" level=warning msg="cleanup warnings time=\"2025-03-19T11:53:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 19 11:53:02.282408 containerd[1569]: time="2025-03-19T11:53:02.282389316Z" level=info msg="StopContainer for \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\" returns successfully" Mar 19 11:53:02.283909 containerd[1569]: time="2025-03-19T11:53:02.283822934Z" level=info msg="StopPodSandbox for \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\"" Mar 19 11:53:02.287328 containerd[1569]: time="2025-03-19T11:53:02.287280548Z" level=info msg="Container to stop \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:53:02.288599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea-shm.mount: Deactivated successfully. Mar 19 11:53:02.290458 systemd[1]: cri-containerd-348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8.scope: Deactivated successfully. Mar 19 11:53:02.290708 systemd[1]: cri-containerd-348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8.scope: Consumed 4.120s CPU time, 202.9M memory peak, 79.1M read from disk, 13.3M written to disk. Mar 19 11:53:02.299107 systemd[1]: cri-containerd-cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea.scope: Deactivated successfully. Mar 19 11:53:02.306032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8-rootfs.mount: Deactivated successfully. 
Mar 19 11:53:02.310568 containerd[1569]: time="2025-03-19T11:53:02.310532595Z" level=info msg="shim disconnected" id=348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8 namespace=k8s.io Mar 19 11:53:02.310568 containerd[1569]: time="2025-03-19T11:53:02.310562593Z" level=warning msg="cleaning up after shim disconnected" id=348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8 namespace=k8s.io Mar 19 11:53:02.310568 containerd[1569]: time="2025-03-19T11:53:02.310567885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:53:02.318379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea-rootfs.mount: Deactivated successfully. Mar 19 11:53:02.320023 containerd[1569]: time="2025-03-19T11:53:02.319989950Z" level=info msg="shim disconnected" id=cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea namespace=k8s.io Mar 19 11:53:02.320126 containerd[1569]: time="2025-03-19T11:53:02.320116695Z" level=warning msg="cleaning up after shim disconnected" id=cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea namespace=k8s.io Mar 19 11:53:02.320162 containerd[1569]: time="2025-03-19T11:53:02.320155196Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:53:02.323077 containerd[1569]: time="2025-03-19T11:53:02.323054338Z" level=info msg="StopContainer for \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\" returns successfully" Mar 19 11:53:02.323503 containerd[1569]: time="2025-03-19T11:53:02.323353687Z" level=info msg="StopPodSandbox for \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\"" Mar 19 11:53:02.326630 containerd[1569]: time="2025-03-19T11:53:02.323369880Z" level=info msg="Container to stop \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:53:02.326713 containerd[1569]: time="2025-03-19T11:53:02.326704183Z" level=info msg="Container to stop \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:53:02.326769 containerd[1569]: time="2025-03-19T11:53:02.326748357Z" level=info msg="Container to stop \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:53:02.326809 containerd[1569]: time="2025-03-19T11:53:02.326801111Z" level=info msg="Container to stop \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:53:02.326910 containerd[1569]: time="2025-03-19T11:53:02.326835473Z" level=info msg="Container to stop \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 19 11:53:02.330533 systemd[1]: cri-containerd-d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896.scope: Deactivated successfully. 
Mar 19 11:53:02.331182 containerd[1569]: time="2025-03-19T11:53:02.331033858Z" level=warning msg="cleanup warnings time=\"2025-03-19T11:53:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 19 11:53:02.333365 containerd[1569]: time="2025-03-19T11:53:02.333057772Z" level=info msg="TearDown network for sandbox \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\" successfully" Mar 19 11:53:02.333365 containerd[1569]: time="2025-03-19T11:53:02.333071375Z" level=info msg="StopPodSandbox for \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\" returns successfully" Mar 19 11:53:02.351284 containerd[1569]: time="2025-03-19T11:53:02.351172978Z" level=info msg="shim disconnected" id=d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896 namespace=k8s.io Mar 19 11:53:02.351284 containerd[1569]: time="2025-03-19T11:53:02.351205427Z" level=warning msg="cleaning up after shim disconnected" id=d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896 namespace=k8s.io Mar 19 11:53:02.351284 containerd[1569]: time="2025-03-19T11:53:02.351210633Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:53:02.360488 containerd[1569]: time="2025-03-19T11:53:02.360460402Z" level=info msg="TearDown network for sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" successfully" Mar 19 11:53:02.360488 containerd[1569]: time="2025-03-19T11:53:02.360481162Z" level=info msg="StopPodSandbox for \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" returns successfully" Mar 19 11:53:02.387703 kubelet[2869]: I0319 11:53:02.387681 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-xtables-lock\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.387703 kubelet[2869]: I0319 11:53:02.387705 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-host-proc-sys-kernel\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.387999 kubelet[2869]: I0319 11:53:02.387719 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cni-path\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.387999 kubelet[2869]: I0319 11:53:02.387728 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-run\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.387999 kubelet[2869]: I0319 11:53:02.387737 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-host-proc-sys-net\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.387999 kubelet[2869]: I0319 11:53:02.387752 2869 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e42a6c2-0292-4a88-b689-e0644a14f755-cilium-config-path\") pod \"4e42a6c2-0292-4a88-b689-e0644a14f755\" (UID: \"4e42a6c2-0292-4a88-b689-e0644a14f755\") " Mar 19 11:53:02.387999 kubelet[2869]: I0319 11:53:02.387776 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-lib-modules\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.387999 kubelet[2869]: I0319 11:53:02.387787 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-config-path\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.388109 kubelet[2869]: I0319 11:53:02.387797 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36de7565-c8c1-49aa-80d1-a3b015d66662-hubble-tls\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.388109 kubelet[2869]: I0319 11:53:02.387808 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-etc-cni-netd\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.388109 kubelet[2869]: I0319 11:53:02.387816 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cb6p\" (UniqueName: \"kubernetes.io/projected/36de7565-c8c1-49aa-80d1-a3b015d66662-kube-api-access-5cb6p\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.388109 kubelet[2869]: I0319 11:53:02.387825 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-bpf-maps\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.388109 kubelet[2869]: I0319 11:53:02.387834 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-cgroup\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.388109 kubelet[2869]: I0319 11:53:02.387842 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-hostproc\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.388207 kubelet[2869]: I0319 11:53:02.387851 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36de7565-c8c1-49aa-80d1-a3b015d66662-clustermesh-secrets\") pod \"36de7565-c8c1-49aa-80d1-a3b015d66662\" (UID: \"36de7565-c8c1-49aa-80d1-a3b015d66662\") " Mar 19 11:53:02.388207 kubelet[2869]: I0319 11:53:02.387860 2869 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xchpw\" (UniqueName: \"kubernetes.io/projected/4e42a6c2-0292-4a88-b689-e0644a14f755-kube-api-access-xchpw\") pod \"4e42a6c2-0292-4a88-b689-e0644a14f755\" (UID: \"4e42a6c2-0292-4a88-b689-e0644a14f755\") " Mar 19 11:53:02.396300 kubelet[2869]: I0319 11:53:02.394224 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:53:02.396300 kubelet[2869]: I0319 11:53:02.395202 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.396300 kubelet[2869]: I0319 11:53:02.395216 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.396300 kubelet[2869]: I0319 11:53:02.395228 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cni-path" (OuterVolumeSpecName: "cni-path") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.396452 kubelet[2869]: I0319 11:53:02.396436 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e42a6c2-0292-4a88-b689-e0644a14f755-kube-api-access-xchpw" (OuterVolumeSpecName: "kube-api-access-xchpw") pod "4e42a6c2-0292-4a88-b689-e0644a14f755" (UID: "4e42a6c2-0292-4a88-b689-e0644a14f755"). InnerVolumeSpecName "kube-api-access-xchpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:53:02.396610 kubelet[2869]: I0319 11:53:02.396596 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.396638 kubelet[2869]: I0319 11:53:02.396618 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.397255 kubelet[2869]: I0319 11:53:02.397240 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36de7565-c8c1-49aa-80d1-a3b015d66662-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:53:02.397283 kubelet[2869]: I0319 11:53:02.397261 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.398154 kubelet[2869]: I0319 11:53:02.397972 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e42a6c2-0292-4a88-b689-e0644a14f755-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4e42a6c2-0292-4a88-b689-e0644a14f755" (UID: "4e42a6c2-0292-4a88-b689-e0644a14f755"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 11:53:02.398154 kubelet[2869]: I0319 11:53:02.397992 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.398154 kubelet[2869]: I0319 11:53:02.398005 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.398154 kubelet[2869]: I0319 11:53:02.398016 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.398154 kubelet[2869]: I0319 11:53:02.398026 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-hostproc" (OuterVolumeSpecName: "hostproc") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 11:53:02.399084 kubelet[2869]: I0319 11:53:02.399061 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36de7565-c8c1-49aa-80d1-a3b015d66662-kube-api-access-5cb6p" (OuterVolumeSpecName: "kube-api-access-5cb6p") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "kube-api-access-5cb6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 11:53:02.399529 kubelet[2869]: I0319 11:53:02.399514 2869 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36de7565-c8c1-49aa-80d1-a3b015d66662-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "36de7565-c8c1-49aa-80d1-a3b015d66662" (UID: "36de7565-c8c1-49aa-80d1-a3b015d66662"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 11:53:02.490363 kubelet[2869]: I0319 11:53:02.490288 2869 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490363 kubelet[2869]: I0319 11:53:02.490316 2869 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490363 kubelet[2869]: I0319 11:53:02.490324 2869 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490363 kubelet[2869]: I0319 11:53:02.490331 2869 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e42a6c2-0292-4a88-b689-e0644a14f755-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490363 kubelet[2869]: I0319 11:53:02.490337 2869 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490363 kubelet[2869]: I0319 11:53:02.490344 2869 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490363 kubelet[2869]: I0319 11:53:02.490349 2869 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36de7565-c8c1-49aa-80d1-a3b015d66662-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490363 kubelet[2869]: I0319 11:53:02.490357 2869 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490648 kubelet[2869]: I0319 11:53:02.490363 2869 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5cb6p\" (UniqueName: \"kubernetes.io/projected/36de7565-c8c1-49aa-80d1-a3b015d66662-kube-api-access-5cb6p\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490648 kubelet[2869]: I0319 11:53:02.490369 2869 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490648 kubelet[2869]: I0319 11:53:02.490377 2869 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490648 kubelet[2869]: I0319 11:53:02.490382 2869 
reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490648 kubelet[2869]: I0319 11:53:02.490388 2869 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36de7565-c8c1-49aa-80d1-a3b015d66662-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490648 kubelet[2869]: I0319 11:53:02.490394 2869 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xchpw\" (UniqueName: \"kubernetes.io/projected/4e42a6c2-0292-4a88-b689-e0644a14f755-kube-api-access-xchpw\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490648 kubelet[2869]: I0319 11:53:02.490400 2869 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.490648 kubelet[2869]: I0319 11:53:02.490405 2869 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36de7565-c8c1-49aa-80d1-a3b015d66662-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 19 11:53:02.623167 kubelet[2869]: E0319 11:53:02.623131 2869 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 19 11:53:02.795220 systemd[1]: Removed slice kubepods-besteffort-pod4e42a6c2_0292_4a88_b689_e0644a14f755.slice - libcontainer container kubepods-besteffort-pod4e42a6c2_0292_4a88_b689_e0644a14f755.slice. Mar 19 11:53:02.801239 kubelet[2869]: I0319 11:53:02.801214 2869 scope.go:117] "RemoveContainer" containerID="e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b" Mar 19 11:53:02.812302 containerd[1569]: time="2025-03-19T11:53:02.812268962Z" level=info msg="RemoveContainer for \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\"" Mar 19 11:53:02.813954 systemd[1]: Removed slice kubepods-burstable-pod36de7565_c8c1_49aa_80d1_a3b015d66662.slice - libcontainer container kubepods-burstable-pod36de7565_c8c1_49aa_80d1_a3b015d66662.slice. Mar 19 11:53:02.814719 systemd[1]: kubepods-burstable-pod36de7565_c8c1_49aa_80d1_a3b015d66662.slice: Consumed 4.167s CPU time, 203.9M memory peak, 80.5M read from disk, 13.3M written to disk. 
Mar 19 11:53:02.816185 containerd[1569]: time="2025-03-19T11:53:02.815792203Z" level=info msg="RemoveContainer for \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\" returns successfully" Mar 19 11:53:02.816638 kubelet[2869]: I0319 11:53:02.816544 2869 scope.go:117] "RemoveContainer" containerID="e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b" Mar 19 11:53:02.817274 containerd[1569]: time="2025-03-19T11:53:02.817235530Z" level=error msg="ContainerStatus for \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\": not found" Mar 19 11:53:02.823211 kubelet[2869]: E0319 11:53:02.823188 2869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\": not found" containerID="e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b" Mar 19 11:53:02.827369 kubelet[2869]: I0319 11:53:02.823219 2869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b"} err="failed to get container status \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e435ec6b41e566c7892e0435f848fe04229d26f7a99cedc3ea6fd004261aca8b\": not found" Mar 19 11:53:02.827402 kubelet[2869]: I0319 11:53:02.827369 2869 scope.go:117] "RemoveContainer" containerID="348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8" Mar 19 11:53:02.829150 containerd[1569]: time="2025-03-19T11:53:02.828881618Z" level=info msg="RemoveContainer for \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\"" Mar 19 11:53:02.830393 containerd[1569]: time="2025-03-19T11:53:02.830355375Z" level=info msg="RemoveContainer for \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\" returns successfully" Mar 19 11:53:02.830532 kubelet[2869]: I0319 11:53:02.830516 2869 scope.go:117] "RemoveContainer" containerID="52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32" Mar 19 11:53:02.831054 containerd[1569]: time="2025-03-19T11:53:02.831038112Z" level=info msg="RemoveContainer for \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\"" Mar 19 11:53:02.833585 containerd[1569]: time="2025-03-19T11:53:02.833565962Z" level=info msg="RemoveContainer for \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\" returns successfully" Mar 19 11:53:02.833836 kubelet[2869]: I0319 11:53:02.833775 2869 scope.go:117] "RemoveContainer" containerID="8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce" Mar 19 11:53:02.834676 containerd[1569]: time="2025-03-19T11:53:02.834658052Z" level=info msg="RemoveContainer for \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\"" Mar 19 11:53:02.835872 containerd[1569]: time="2025-03-19T11:53:02.835853447Z" level=info msg="RemoveContainer for \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\" returns successfully" Mar 19 11:53:02.835994 kubelet[2869]: I0319 11:53:02.835981 2869 scope.go:117] "RemoveContainer" containerID="3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f" Mar 19 11:53:02.836512 containerd[1569]: 
time="2025-03-19T11:53:02.836498236Z" level=info msg="RemoveContainer for \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\"" Mar 19 11:53:02.837505 containerd[1569]: time="2025-03-19T11:53:02.837491128Z" level=info msg="RemoveContainer for \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\" returns successfully" Mar 19 11:53:02.837588 kubelet[2869]: I0319 11:53:02.837563 2869 scope.go:117] "RemoveContainer" containerID="e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a" Mar 19 11:53:02.838281 containerd[1569]: time="2025-03-19T11:53:02.838098181Z" level=info msg="RemoveContainer for \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\"" Mar 19 11:53:02.839068 containerd[1569]: time="2025-03-19T11:53:02.839056364Z" level=info msg="RemoveContainer for \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\" returns successfully" Mar 19 11:53:02.839171 kubelet[2869]: I0319 11:53:02.839160 2869 scope.go:117] "RemoveContainer" containerID="348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8" Mar 19 11:53:02.839297 containerd[1569]: time="2025-03-19T11:53:02.839247501Z" level=error msg="ContainerStatus for \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\": not found" Mar 19 11:53:02.839326 kubelet[2869]: E0319 11:53:02.839312 2869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\": not found" containerID="348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8" Mar 19 11:53:02.839405 kubelet[2869]: I0319 11:53:02.839324 2869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8"} err="failed to get container status \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\": rpc error: code = NotFound desc = an error occurred when try to find container \"348d2e5f9beaca757ee1b8d7671d181ba72d5dcc474de60d745a95222fd69af8\": not found" Mar 19 11:53:02.839405 kubelet[2869]: I0319 11:53:02.839335 2869 scope.go:117] "RemoveContainer" containerID="52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32" Mar 19 11:53:02.839562 containerd[1569]: time="2025-03-19T11:53:02.839512535Z" level=error msg="ContainerStatus for \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\": not found" Mar 19 11:53:02.839590 kubelet[2869]: E0319 11:53:02.839566 2869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\": not found" containerID="52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32" Mar 19 11:53:02.839590 kubelet[2869]: I0319 11:53:02.839577 2869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32"} err="failed to get container status 
\"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\": rpc error: code = NotFound desc = an error occurred when try to find container \"52157db8852e6832e05fd37d42c29de03331165f4f0a441b9864b75f88310a32\": not found" Mar 19 11:53:02.839590 kubelet[2869]: I0319 11:53:02.839585 2869 scope.go:117] "RemoveContainer" containerID="8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce" Mar 19 11:53:02.839739 kubelet[2869]: E0319 11:53:02.839709 2869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\": not found" containerID="8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce" Mar 19 11:53:02.839739 kubelet[2869]: I0319 11:53:02.839718 2869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce"} err="failed to get container status \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\": not found" Mar 19 11:53:02.839739 kubelet[2869]: I0319 11:53:02.839725 2869 scope.go:117] "RemoveContainer" containerID="3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f" Mar 19 11:53:02.839875 containerd[1569]: time="2025-03-19T11:53:02.839657360Z" level=error msg="ContainerStatus for \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d446379e30af4808d339c016f32209960e54393fcf08f91b5bc5e8e4c87a1ce\": not found" Mar 19 11:53:02.839875 containerd[1569]: time="2025-03-19T11:53:02.839814213Z" level=error msg="ContainerStatus for \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\": not found" Mar 19 11:53:02.840014 containerd[1569]: time="2025-03-19T11:53:02.839964232Z" level=error msg="ContainerStatus for \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\": not found" Mar 19 11:53:02.840038 kubelet[2869]: E0319 11:53:02.839875 2869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\": not found" containerID="3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f" Mar 19 11:53:02.840038 kubelet[2869]: I0319 11:53:02.839886 2869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f"} err="failed to get container status \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3638b79bac6d7af9401ed8a11fbc70f59400d16cc56212fa9885caba3ea13a4f\": not found" Mar 19 11:53:02.840038 kubelet[2869]: I0319 11:53:02.839893 2869 scope.go:117] "RemoveContainer" 
containerID="e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a" Mar 19 11:53:02.840038 kubelet[2869]: E0319 11:53:02.840026 2869 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\": not found" containerID="e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a" Mar 19 11:53:02.840038 kubelet[2869]: I0319 11:53:02.840036 2869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a"} err="failed to get container status \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9d9cf2fdc0b1d6fcc4ce20cfbad139e901eb542964f5c4761aec55581d9879a\": not found" Mar 19 11:53:03.242411 systemd[1]: var-lib-kubelet-pods-4e42a6c2\x2d0292\x2d4a88\x2db689\x2de0644a14f755-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxchpw.mount: Deactivated successfully. Mar 19 11:53:03.242506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896-rootfs.mount: Deactivated successfully. Mar 19 11:53:03.242566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896-shm.mount: Deactivated successfully. Mar 19 11:53:03.242618 systemd[1]: var-lib-kubelet-pods-36de7565\x2dc8c1\x2d49aa\x2d80d1\x2da3b015d66662-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5cb6p.mount: Deactivated successfully. Mar 19 11:53:03.242675 systemd[1]: var-lib-kubelet-pods-36de7565\x2dc8c1\x2d49aa\x2d80d1\x2da3b015d66662-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 19 11:53:03.243060 systemd[1]: var-lib-kubelet-pods-36de7565\x2dc8c1\x2d49aa\x2d80d1\x2da3b015d66662-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 19 11:53:03.564827 kubelet[2869]: I0319 11:53:03.564520 2869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36de7565-c8c1-49aa-80d1-a3b015d66662" path="/var/lib/kubelet/pods/36de7565-c8c1-49aa-80d1-a3b015d66662/volumes" Mar 19 11:53:03.565073 kubelet[2869]: I0319 11:53:03.565014 2869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e42a6c2-0292-4a88-b689-e0644a14f755" path="/var/lib/kubelet/pods/4e42a6c2-0292-4a88-b689-e0644a14f755/volumes" Mar 19 11:53:04.186423 sshd[4478]: Connection closed by 147.75.109.163 port 54940 Mar 19 11:53:04.187262 sshd-session[4475]: pam_unix(sshd:session): session closed for user core Mar 19 11:53:04.196134 systemd[1]: sshd@24-139.178.70.109:22-147.75.109.163:54940.service: Deactivated successfully. Mar 19 11:53:04.197788 systemd[1]: session-25.scope: Deactivated successfully. Mar 19 11:53:04.198946 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit. Mar 19 11:53:04.203974 systemd[1]: Started sshd@25-139.178.70.109:22-147.75.109.163:44168.service - OpenSSH per-connection server daemon (147.75.109.163:44168). Mar 19 11:53:04.205720 systemd-logind[1539]: Removed session 25. 
Mar 19 11:53:04.251203 sshd[4637]: Accepted publickey for core from 147.75.109.163 port 44168 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:53:04.251868 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:53:04.255253 systemd-logind[1539]: New session 26 of user core. Mar 19 11:53:04.257978 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 19 11:53:04.626773 sshd[4640]: Connection closed by 147.75.109.163 port 44168 Mar 19 11:53:04.627108 sshd-session[4637]: pam_unix(sshd:session): session closed for user core Mar 19 11:53:04.635219 systemd[1]: sshd@25-139.178.70.109:22-147.75.109.163:44168.service: Deactivated successfully. Mar 19 11:53:04.637113 systemd[1]: session-26.scope: Deactivated successfully. Mar 19 11:53:04.638398 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit. Mar 19 11:53:04.643888 systemd[1]: Started sshd@26-139.178.70.109:22-147.75.109.163:44172.service - OpenSSH per-connection server daemon (147.75.109.163:44172). Mar 19 11:53:04.646022 systemd-logind[1539]: Removed session 26. Mar 19 11:53:04.670229 kubelet[2869]: I0319 11:53:04.670030 2869 topology_manager.go:215] "Topology Admit Handler" podUID="855d9ce5-bbe8-4f32-bdfa-83bef669991a" podNamespace="kube-system" podName="cilium-zfnpk" Mar 19 11:53:04.670229 kubelet[2869]: E0319 11:53:04.670071 2869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e42a6c2-0292-4a88-b689-e0644a14f755" containerName="cilium-operator" Mar 19 11:53:04.670229 kubelet[2869]: E0319 11:53:04.670078 2869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36de7565-c8c1-49aa-80d1-a3b015d66662" containerName="mount-bpf-fs" Mar 19 11:53:04.670229 kubelet[2869]: E0319 11:53:04.670082 2869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36de7565-c8c1-49aa-80d1-a3b015d66662" containerName="clean-cilium-state" Mar 19 11:53:04.670229 kubelet[2869]: E0319 11:53:04.670086 2869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36de7565-c8c1-49aa-80d1-a3b015d66662" containerName="mount-cgroup" Mar 19 11:53:04.670229 kubelet[2869]: E0319 11:53:04.670090 2869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36de7565-c8c1-49aa-80d1-a3b015d66662" containerName="apply-sysctl-overwrites" Mar 19 11:53:04.670229 kubelet[2869]: E0319 11:53:04.670094 2869 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36de7565-c8c1-49aa-80d1-a3b015d66662" containerName="cilium-agent" Mar 19 11:53:04.676439 kubelet[2869]: I0319 11:53:04.671718 2869 memory_manager.go:354] "RemoveStaleState removing state" podUID="36de7565-c8c1-49aa-80d1-a3b015d66662" containerName="cilium-agent" Mar 19 11:53:04.676537 kubelet[2869]: I0319 11:53:04.676524 2869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e42a6c2-0292-4a88-b689-e0644a14f755" containerName="cilium-operator" Mar 19 11:53:04.680665 sshd[4649]: Accepted publickey for core from 147.75.109.163 port 44172 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM Mar 19 11:53:04.682335 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:53:04.690191 systemd-logind[1539]: New session 27 of user core. Mar 19 11:53:04.699852 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 19 11:53:04.700559 systemd[1]: Created slice kubepods-burstable-pod855d9ce5_bbe8_4f32_bdfa_83bef669991a.slice - libcontainer container kubepods-burstable-pod855d9ce5_bbe8_4f32_bdfa_83bef669991a.slice.
Mar 19 11:53:04.705583 kubelet[2869]: I0319 11:53:04.705397 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-bpf-maps\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705583 kubelet[2869]: I0319 11:53:04.705416 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-cilium-cgroup\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705583 kubelet[2869]: I0319 11:53:04.705426 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-hostproc\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705583 kubelet[2869]: I0319 11:53:04.705437 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-lib-modules\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705583 kubelet[2869]: I0319 11:53:04.705445 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-cni-path\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705583 kubelet[2869]: I0319 11:53:04.705453 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/855d9ce5-bbe8-4f32-bdfa-83bef669991a-clustermesh-secrets\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705759 kubelet[2869]: I0319 11:53:04.705462 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/855d9ce5-bbe8-4f32-bdfa-83bef669991a-cilium-ipsec-secrets\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705759 kubelet[2869]: I0319 11:53:04.705473 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-host-proc-sys-kernel\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705759 kubelet[2869]: I0319 11:53:04.705482 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/855d9ce5-bbe8-4f32-bdfa-83bef669991a-hubble-tls\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705759 kubelet[2869]: I0319 11:53:04.705490 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-cilium-run\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705759 kubelet[2869]: I0319 11:53:04.705499 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-etc-cni-netd\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705759 kubelet[2869]: I0319 11:53:04.705507 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-xtables-lock\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705872 kubelet[2869]: I0319 11:53:04.705516 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/855d9ce5-bbe8-4f32-bdfa-83bef669991a-cilium-config-path\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705872 kubelet[2869]: I0319 11:53:04.705525 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/855d9ce5-bbe8-4f32-bdfa-83bef669991a-host-proc-sys-net\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.705872 kubelet[2869]: I0319 11:53:04.705534 2869 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j9nt\" (UniqueName: \"kubernetes.io/projected/855d9ce5-bbe8-4f32-bdfa-83bef669991a-kube-api-access-9j9nt\") pod \"cilium-zfnpk\" (UID: \"855d9ce5-bbe8-4f32-bdfa-83bef669991a\") " pod="kube-system/cilium-zfnpk"
Mar 19 11:53:04.753890 sshd[4652]: Connection closed by 147.75.109.163 port 44172
Mar 19 11:53:04.754669 sshd-session[4649]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:04.764078 systemd[1]: sshd@26-139.178.70.109:22-147.75.109.163:44172.service: Deactivated successfully.
Mar 19 11:53:04.765259 systemd[1]: session-27.scope: Deactivated successfully.
Mar 19 11:53:04.766193 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit.
Mar 19 11:53:04.767589 systemd[1]: Started sshd@27-139.178.70.109:22-147.75.109.163:44186.service - OpenSSH per-connection server daemon (147.75.109.163:44186).
Mar 19 11:53:04.768261 systemd-logind[1539]: Removed session 27.
Mar 19 11:53:04.806330 sshd[4658]: Accepted publickey for core from 147.75.109.163 port 44186 ssh2: RSA SHA256:9UeYaQBZzbN4uLPVjubPXLdVf4rYnI8jHkahT0DzHHM
Mar 19 11:53:04.810852 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:53:04.828496 systemd-logind[1539]: New session 28 of user core.
Mar 19 11:53:04.835825 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 19 11:53:05.005743 containerd[1569]: time="2025-03-19T11:53:05.005677646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfnpk,Uid:855d9ce5-bbe8-4f32-bdfa-83bef669991a,Namespace:kube-system,Attempt:0,}"
Mar 19 11:53:05.018061 containerd[1569]: time="2025-03-19T11:53:05.017937577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:53:05.018061 containerd[1569]: time="2025-03-19T11:53:05.017980276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:53:05.018061 containerd[1569]: time="2025-03-19T11:53:05.017989845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:53:05.018342 containerd[1569]: time="2025-03-19T11:53:05.018315080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:53:05.037125 systemd[1]: Started cri-containerd-2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25.scope - libcontainer container 2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25.
Mar 19 11:53:05.051346 containerd[1569]: time="2025-03-19T11:53:05.051315475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfnpk,Uid:855d9ce5-bbe8-4f32-bdfa-83bef669991a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\""
Mar 19 11:53:05.056709 containerd[1569]: time="2025-03-19T11:53:05.056632274Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 19 11:53:05.061540 containerd[1569]: time="2025-03-19T11:53:05.061511749Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d\""
Mar 19 11:53:05.062198 containerd[1569]: time="2025-03-19T11:53:05.061839999Z" level=info msg="StartContainer for \"0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d\""
Mar 19 11:53:05.078845 systemd[1]: Started cri-containerd-0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d.scope - libcontainer container 0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d.
Mar 19 11:53:05.093906 containerd[1569]: time="2025-03-19T11:53:05.093879363Z" level=info msg="StartContainer for \"0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d\" returns successfully"
Mar 19 11:53:05.119178 systemd[1]: cri-containerd-0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d.scope: Deactivated successfully.
Mar 19 11:53:05.119383 systemd[1]: cri-containerd-0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d.scope: Consumed 12ms CPU time, 7.8M memory peak, 2.9M read from disk.
Mar 19 11:53:05.138901 containerd[1569]: time="2025-03-19T11:53:05.138822400Z" level=info msg="shim disconnected" id=0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d namespace=k8s.io
Mar 19 11:53:05.138901 containerd[1569]: time="2025-03-19T11:53:05.138854049Z" level=warning msg="cleaning up after shim disconnected" id=0bcfbbed63d89d718626664f397e2fd7861e032015758890e2fb1e3f9ecb4a1d namespace=k8s.io
Mar 19 11:53:05.138901 containerd[1569]: time="2025-03-19T11:53:05.138859870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:53:05.826060 containerd[1569]: time="2025-03-19T11:53:05.826010703Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:53:05.832368 containerd[1569]: time="2025-03-19T11:53:05.832333797Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466\""
Mar 19 11:53:05.833214 containerd[1569]: time="2025-03-19T11:53:05.832667932Z" level=info msg="StartContainer for \"3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466\""
Mar 19 11:53:05.836677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717917749.mount: Deactivated successfully.
Mar 19 11:53:05.857947 systemd[1]: Started cri-containerd-3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466.scope - libcontainer container 3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466.
Mar 19 11:53:05.873062 containerd[1569]: time="2025-03-19T11:53:05.872950491Z" level=info msg="StartContainer for \"3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466\" returns successfully"
Mar 19 11:53:05.883005 systemd[1]: cri-containerd-3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466.scope: Deactivated successfully.
Mar 19 11:53:05.883375 systemd[1]: cri-containerd-3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466.scope: Consumed 11ms CPU time, 7.2M memory peak, 2M read from disk.
Mar 19 11:53:05.895846 containerd[1569]: time="2025-03-19T11:53:05.895733353Z" level=info msg="shim disconnected" id=3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466 namespace=k8s.io
Mar 19 11:53:05.895846 containerd[1569]: time="2025-03-19T11:53:05.895812668Z" level=warning msg="cleaning up after shim disconnected" id=3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466 namespace=k8s.io
Mar 19 11:53:05.895846 containerd[1569]: time="2025-03-19T11:53:05.895819443Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:53:06.811485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f0329c4b273738dcaa860b612cbb7c87fbe4ca376bdf8a9f010dcc2ca1c8466-rootfs.mount: Deactivated successfully.
Mar 19 11:53:06.829531 containerd[1569]: time="2025-03-19T11:53:06.829499643Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:53:06.838393 containerd[1569]: time="2025-03-19T11:53:06.838335229Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0\""
Mar 19 11:53:06.839224 containerd[1569]: time="2025-03-19T11:53:06.839072004Z" level=info msg="StartContainer for \"707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0\""
Mar 19 11:53:06.856000 systemd[1]: run-containerd-runc-k8s.io-707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0-runc.0U9DDs.mount: Deactivated successfully.
Mar 19 11:53:06.862845 systemd[1]: Started cri-containerd-707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0.scope - libcontainer container 707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0.
Mar 19 11:53:06.878973 containerd[1569]: time="2025-03-19T11:53:06.878950768Z" level=info msg="StartContainer for \"707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0\" returns successfully"
Mar 19 11:53:06.885800 systemd[1]: cri-containerd-707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0.scope: Deactivated successfully.
Mar 19 11:53:06.896884 containerd[1569]: time="2025-03-19T11:53:06.896854066Z" level=info msg="shim disconnected" id=707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0 namespace=k8s.io
Mar 19 11:53:06.897014 containerd[1569]: time="2025-03-19T11:53:06.896987784Z" level=warning msg="cleaning up after shim disconnected" id=707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0 namespace=k8s.io
Mar 19 11:53:06.897114 containerd[1569]: time="2025-03-19T11:53:06.897090619Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:53:06.903238 containerd[1569]: time="2025-03-19T11:53:06.903207742Z" level=warning msg="cleanup warnings time=\"2025-03-19T11:53:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 19 11:53:07.624249 kubelet[2869]: E0319 11:53:07.624175 2869 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 19 11:53:07.811714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-707e889bd17efc1621dfceac3d984ed5d8d6cb0164ef31aa22fab86e6e6c8ba0-rootfs.mount: Deactivated successfully.
Mar 19 11:53:07.831485 containerd[1569]: time="2025-03-19T11:53:07.831369735Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:53:07.873700 containerd[1569]: time="2025-03-19T11:53:07.873664416Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4\""
Mar 19 11:53:07.874518 containerd[1569]: time="2025-03-19T11:53:07.873986223Z" level=info msg="StartContainer for \"e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4\""
Mar 19 11:53:07.896853 systemd[1]: Started cri-containerd-e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4.scope - libcontainer container e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4.
Mar 19 11:53:07.912393 systemd[1]: cri-containerd-e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4.scope: Deactivated successfully.
Mar 19 11:53:07.919115 containerd[1569]: time="2025-03-19T11:53:07.918897531Z" level=info msg="StartContainer for \"e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4\" returns successfully"
Mar 19 11:53:07.933253 containerd[1569]: time="2025-03-19T11:53:07.933221209Z" level=info msg="shim disconnected" id=e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4 namespace=k8s.io
Mar 19 11:53:07.933424 containerd[1569]: time="2025-03-19T11:53:07.933348446Z" level=warning msg="cleaning up after shim disconnected" id=e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4 namespace=k8s.io
Mar 19 11:53:07.933424 containerd[1569]: time="2025-03-19T11:53:07.933356889Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:53:08.241683 systemd[1]: Started sshd@28-139.178.70.109:22-8.146.204.142:37550.service - OpenSSH per-connection server daemon (8.146.204.142:37550).
Mar 19 11:53:08.811661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e28755839015e24897ecb1c6fbe5784d7c858e7859fa31adf5f5dd1fea4119e4-rootfs.mount: Deactivated successfully.
Mar 19 11:53:08.833891 containerd[1569]: time="2025-03-19T11:53:08.833700762Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:53:08.843478 containerd[1569]: time="2025-03-19T11:53:08.843304962Z" level=info msg="CreateContainer within sandbox \"2801c6a0a4fd121a16440a79188814bdd95f99979700d4fc6c9d40e340e95b25\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99e9cdbe5c2e9e859e2ac52fa0a3ea6aeed80ec3cbed5cfd1ff110840e422b72\""
Mar 19 11:53:08.845154 containerd[1569]: time="2025-03-19T11:53:08.843892321Z" level=info msg="StartContainer for \"99e9cdbe5c2e9e859e2ac52fa0a3ea6aeed80ec3cbed5cfd1ff110840e422b72\""
Mar 19 11:53:08.859023 systemd[1]: run-containerd-runc-k8s.io-99e9cdbe5c2e9e859e2ac52fa0a3ea6aeed80ec3cbed5cfd1ff110840e422b72-runc.pTTRnI.mount: Deactivated successfully.
Mar 19 11:53:08.865841 systemd[1]: Started cri-containerd-99e9cdbe5c2e9e859e2ac52fa0a3ea6aeed80ec3cbed5cfd1ff110840e422b72.scope - libcontainer container 99e9cdbe5c2e9e859e2ac52fa0a3ea6aeed80ec3cbed5cfd1ff110840e422b72.
Mar 19 11:53:08.880741 containerd[1569]: time="2025-03-19T11:53:08.880722819Z" level=info msg="StartContainer for \"99e9cdbe5c2e9e859e2ac52fa0a3ea6aeed80ec3cbed5cfd1ff110840e422b72\" returns successfully"
Mar 19 11:53:09.354825 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 19 11:53:09.593839 kubelet[2869]: I0319 11:53:09.593791 2869 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-19T11:53:09Z","lastTransitionTime":"2025-03-19T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 19 11:53:09.845034 kubelet[2869]: I0319 11:53:09.844857 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zfnpk" podStartSLOduration=5.844844646 podStartE2EDuration="5.844844646s" podCreationTimestamp="2025-03-19 11:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:53:09.844493566 +0000 UTC m=+112.348999700" watchObservedRunningTime="2025-03-19 11:53:09.844844646 +0000 UTC m=+112.349350776"
Mar 19 11:53:11.770573 systemd-networkd[1453]: lxc_health: Link UP
Mar 19 11:53:11.788177 systemd-networkd[1453]: lxc_health: Gained carrier
Mar 19 11:53:12.953882 systemd-networkd[1453]: lxc_health: Gained IPv6LL
Mar 19 11:53:13.228325 systemd[1]: run-containerd-runc-k8s.io-99e9cdbe5c2e9e859e2ac52fa0a3ea6aeed80ec3cbed5cfd1ff110840e422b72-runc.8Vi1Wk.mount: Deactivated successfully.
Mar 19 11:53:17.370266 systemd[1]: run-containerd-runc-k8s.io-99e9cdbe5c2e9e859e2ac52fa0a3ea6aeed80ec3cbed5cfd1ff110840e422b72-runc.h16qsO.mount: Deactivated successfully.
Mar 19 11:53:17.403859 sshd[4665]: Connection closed by 147.75.109.163 port 44186
Mar 19 11:53:17.404444 sshd-session[4658]: pam_unix(sshd:session): session closed for user core
Mar 19 11:53:17.406476 systemd[1]: sshd@27-139.178.70.109:22-147.75.109.163:44186.service: Deactivated successfully.
Mar 19 11:53:17.407730 systemd[1]: session-28.scope: Deactivated successfully.
Mar 19 11:53:17.408552 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit.
Mar 19 11:53:17.409262 systemd-logind[1539]: Removed session 28.
Mar 19 11:53:17.573818 containerd[1569]: time="2025-03-19T11:53:17.573792852Z" level=info msg="StopPodSandbox for \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\""
Mar 19 11:53:17.574815 containerd[1569]: time="2025-03-19T11:53:17.574133468Z" level=info msg="TearDown network for sandbox \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\" successfully"
Mar 19 11:53:17.574815 containerd[1569]: time="2025-03-19T11:53:17.574146788Z" level=info msg="StopPodSandbox for \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\" returns successfully"
Mar 19 11:53:17.576546 containerd[1569]: time="2025-03-19T11:53:17.575401893Z" level=info msg="RemovePodSandbox for \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\""
Mar 19 11:53:17.576546 containerd[1569]: time="2025-03-19T11:53:17.575434174Z" level=info msg="Forcibly stopping sandbox \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\""
Mar 19 11:53:17.576546 containerd[1569]: time="2025-03-19T11:53:17.575472050Z" level=info msg="TearDown network for sandbox \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\" successfully"
Mar 19 11:53:17.578262 containerd[1569]: time="2025-03-19T11:53:17.578161255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:53:17.578262 containerd[1569]: time="2025-03-19T11:53:17.578206745Z" level=info msg="RemovePodSandbox \"cdbfef16906b5bf606a3a4a01fcc8d03e39e8594f61d24646cfef552d2960aea\" returns successfully"
Mar 19 11:53:17.579097 containerd[1569]: time="2025-03-19T11:53:17.579080635Z" level=info msg="StopPodSandbox for \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\""
Mar 19 11:53:17.579193 containerd[1569]: time="2025-03-19T11:53:17.579182223Z" level=info msg="TearDown network for sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" successfully"
Mar 19 11:53:17.579242 containerd[1569]: time="2025-03-19T11:53:17.579233699Z" level=info msg="StopPodSandbox for \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" returns successfully"
Mar 19 11:53:17.579470 containerd[1569]: time="2025-03-19T11:53:17.579451689Z" level=info msg="RemovePodSandbox for \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\""
Mar 19 11:53:17.579507 containerd[1569]: time="2025-03-19T11:53:17.579472097Z" level=info msg="Forcibly stopping sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\""
Mar 19 11:53:17.579532 containerd[1569]: time="2025-03-19T11:53:17.579509918Z" level=info msg="TearDown network for sandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" successfully"
Mar 19 11:53:17.580826 containerd[1569]: time="2025-03-19T11:53:17.580805541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:53:17.580911 containerd[1569]: time="2025-03-19T11:53:17.580836330Z" level=info msg="RemovePodSandbox \"d996b987392792ac78a7df867f11d71a7388568b3e7c44b43b692533f28c9896\" returns successfully"