Mar 20 17:55:07.763097 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 20 13:16:44 -00 2025 Mar 20 17:55:07.765799 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=30d38910dcb9abcb2ae1fb8c4b62196472dfae1a70f494441b86ff0de2ee88c9 Mar 20 17:55:07.765809 kernel: Disabled fast string operations Mar 20 17:55:07.765813 kernel: BIOS-provided physical RAM map: Mar 20 17:55:07.765817 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Mar 20 17:55:07.765821 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Mar 20 17:55:07.765829 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Mar 20 17:55:07.765834 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Mar 20 17:55:07.765838 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Mar 20 17:55:07.765842 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Mar 20 17:55:07.765847 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Mar 20 17:55:07.765851 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Mar 20 17:55:07.765855 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Mar 20 17:55:07.765859 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Mar 20 17:55:07.765866 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Mar 20 17:55:07.765871 kernel: NX (Execute Disable) protection: active Mar 20 17:55:07.765876 kernel: APIC: Static calls initialized Mar 20 17:55:07.765880 kernel: SMBIOS 2.7 present. Mar 20 17:55:07.765885 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Mar 20 17:55:07.765890 kernel: vmware: hypercall mode: 0x00 Mar 20 17:55:07.765895 kernel: Hypervisor detected: VMware Mar 20 17:55:07.765900 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Mar 20 17:55:07.765906 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Mar 20 17:55:07.765911 kernel: vmware: using clock offset of 3209197876 ns Mar 20 17:55:07.765915 kernel: tsc: Detected 3408.000 MHz processor Mar 20 17:55:07.765921 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 20 17:55:07.765927 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 20 17:55:07.765932 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Mar 20 17:55:07.765937 kernel: total RAM covered: 3072M Mar 20 17:55:07.765942 kernel: Found optimal setting for mtrr clean up Mar 20 17:55:07.765948 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Mar 20 17:55:07.765953 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Mar 20 17:55:07.765959 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 20 17:55:07.765964 kernel: Using GB pages for direct mapping Mar 20 17:55:07.765969 kernel: ACPI: Early table checksum verification disabled Mar 20 17:55:07.765974 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Mar 20 17:55:07.765979 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Mar 20 17:55:07.765984 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Mar 20 17:55:07.765989 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Mar 20 17:55:07.765994 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Mar 20 17:55:07.766002 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Mar 20 17:55:07.766007 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Mar 20 17:55:07.766013 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Mar 20 17:55:07.766018 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Mar 20 17:55:07.766023 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Mar 20 17:55:07.766029 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Mar 20 17:55:07.766035 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Mar 20 17:55:07.766041 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Mar 20 17:55:07.766046 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Mar 20 17:55:07.766051 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Mar 20 17:55:07.766056 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Mar 20 17:55:07.766061 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Mar 20 17:55:07.766066 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Mar 20 17:55:07.766072 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Mar 20 17:55:07.766077 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Mar 20 17:55:07.766083 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Mar 20 17:55:07.766088 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Mar 20 17:55:07.766093 kernel: system APIC only can use physical flat Mar 20 17:55:07.766098 kernel: APIC: Switched APIC routing to: physical flat Mar 20 17:55:07.766103 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 20 17:55:07.766109 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Mar 20 17:55:07.766114 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Mar 20 17:55:07.766119 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Mar 20 17:55:07.766124 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Mar 20 17:55:07.766129 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Mar 20 17:55:07.766135 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Mar 20 17:55:07.766140 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Mar 20 17:55:07.766145 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Mar 20 17:55:07.766150 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Mar 20 17:55:07.766155 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Mar 20 17:55:07.766161 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Mar 20 17:55:07.766166 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Mar 20 17:55:07.766170 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Mar 20 17:55:07.766176 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Mar 20 17:55:07.766181 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Mar 20 17:55:07.766187 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Mar 20 17:55:07.766192 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Mar 20 17:55:07.766197 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Mar 20 17:55:07.766202 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Mar 20 17:55:07.766207 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Mar 20 17:55:07.766212 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Mar 20 17:55:07.766217 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Mar 20 17:55:07.766222 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Mar 20 17:55:07.766227 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Mar 20 17:55:07.766232 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Mar 20 17:55:07.766238 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Mar 20 17:55:07.766243 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Mar 20 17:55:07.766249 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Mar 20 17:55:07.766254 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 Mar 20 17:55:07.766258 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Mar 20 17:55:07.766263 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Mar 20 17:55:07.766268 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Mar 20 17:55:07.766274 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Mar 20 17:55:07.766279 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Mar 20 17:55:07.766284 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Mar 20 17:55:07.766289 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Mar 20 17:55:07.766295 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Mar 20 17:55:07.766300 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Mar 20 17:55:07.766305 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Mar 20 17:55:07.766310 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Mar 20 17:55:07.766315 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Mar 20 17:55:07.766320 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Mar 20 17:55:07.766325 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Mar 20 17:55:07.766330 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Mar 20 17:55:07.766335 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Mar 20 17:55:07.766340 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Mar 20 17:55:07.766346 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Mar 20 17:55:07.766351 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Mar 20 17:55:07.766356 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Mar 20 17:55:07.766362 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Mar 20 17:55:07.766367 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Mar 20 17:55:07.766371 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Mar 20 17:55:07.766377 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Mar 20 17:55:07.766381 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Mar 20 17:55:07.766387 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Mar 20 17:55:07.766391 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Mar 20 17:55:07.766398 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Mar 20 17:55:07.766403 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Mar 20 17:55:07.766411 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Mar 20 17:55:07.766418 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Mar 20 17:55:07.766423 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Mar 20 17:55:07.766429 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Mar 20 17:55:07.766434 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Mar 20 17:55:07.766439 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Mar 20 17:55:07.766445 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Mar 20 17:55:07.766451 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Mar 20 17:55:07.766457 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Mar 20 17:55:07.766462 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Mar 20 17:55:07.766467 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Mar 20 17:55:07.766473 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Mar 20 17:55:07.766478 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Mar 20 17:55:07.766484 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Mar 20 17:55:07.766489 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Mar 20 17:55:07.766494 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Mar 20 17:55:07.766500 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Mar 20 17:55:07.766506 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Mar 20 17:55:07.766511 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Mar 20 17:55:07.766517 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Mar 20 17:55:07.766522 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Mar 20 17:55:07.766528 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Mar 20 17:55:07.766533 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Mar 20 17:55:07.766539 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Mar 20 17:55:07.766544 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 Mar 20 17:55:07.766549 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Mar 20 17:55:07.766555 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Mar 20 17:55:07.766560 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Mar 20 17:55:07.766566 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Mar 20 17:55:07.766572 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Mar 20 17:55:07.766577 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Mar 20 17:55:07.766582 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Mar 20 17:55:07.766587 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Mar 20 17:55:07.766593 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Mar 20 17:55:07.766598 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Mar 20 17:55:07.766604 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Mar 20 17:55:07.766609 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Mar 20 17:55:07.766614 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Mar 20 17:55:07.766621 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Mar 20 17:55:07.766626 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Mar 20 17:55:07.766632 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Mar 20 17:55:07.766637 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Mar 20 17:55:07.766642 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Mar 20 17:55:07.766648 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Mar 20 17:55:07.766653 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Mar 20 17:55:07.766658 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Mar 20 17:55:07.766663 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Mar 20 17:55:07.766669 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Mar 20 17:55:07.766675 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Mar 20 17:55:07.766680 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Mar 20 17:55:07.766686 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Mar 20 17:55:07.766691 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Mar 20 17:55:07.766697 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Mar 20 17:55:07.766702 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Mar 20 17:55:07.766707 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Mar 20 17:55:07.766713 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Mar 20 17:55:07.766718 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Mar 20 17:55:07.766724 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Mar 20 17:55:07.766730 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Mar 20 17:55:07.766736 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Mar 20 17:55:07.766741 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Mar 20 17:55:07.768075 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Mar 20 17:55:07.768085 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Mar 20 17:55:07.768091 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Mar 20 17:55:07.768096 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Mar 20 17:55:07.768102 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Mar 20 17:55:07.768107 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Mar 20 17:55:07.768112 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Mar 20 17:55:07.768118 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Mar 20 17:55:07.768126 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 20 17:55:07.768132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Mar 20 17:55:07.768138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Mar 20 17:55:07.768143 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Mar 20 17:55:07.768149 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Mar 20 17:55:07.768155 kernel: Zone ranges: Mar 20 17:55:07.768160 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 20 17:55:07.768166 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] Mar 20 17:55:07.768171 kernel: Normal empty Mar 20 17:55:07.768178 kernel: Movable zone start for each node Mar 20 17:55:07.768184 kernel: Early memory node ranges Mar 20 17:55:07.768189 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Mar 20 17:55:07.768195 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Mar 20 17:55:07.768200 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Mar 20 17:55:07.768206 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Mar 20 17:55:07.768212 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 20 17:55:07.768217 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Mar 20 17:55:07.768223 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Mar 20 17:55:07.768230 kernel: ACPI: PM-Timer IO Port: 0x1008 Mar 20 17:55:07.768235 kernel: system APIC only can use physical flat Mar 20 17:55:07.768241 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Mar 20 17:55:07.768246 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Mar 20 17:55:07.768252 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Mar 20 17:55:07.768257 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Mar 20 17:55:07.768263 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Mar 20 17:55:07.768268 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Mar 20 17:55:07.768274 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Mar 20 17:55:07.768279 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Mar 20 17:55:07.768286 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Mar 20 17:55:07.768292 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Mar 20 17:55:07.768297 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Mar 20 17:55:07.768303 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Mar 20 17:55:07.768308 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Mar 20 17:55:07.768314 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Mar 20 17:55:07.768319 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Mar 20 17:55:07.768324 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Mar 20 17:55:07.768330 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Mar 20 17:55:07.768336 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Mar 20 17:55:07.768342 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Mar 20 17:55:07.768348 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Mar 20 17:55:07.768353 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Mar 20 17:55:07.768358 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Mar 20 17:55:07.768364 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Mar 20 17:55:07.768369 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Mar 20 17:55:07.768375 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Mar 20 17:55:07.768380 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Mar 20 17:55:07.768385 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Mar 20 17:55:07.768391 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Mar 20 17:55:07.768397 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Mar 20 17:55:07.768403 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Mar 20 17:55:07.768408 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Mar 20 17:55:07.768414 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Mar 20 17:55:07.768419 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Mar 20 17:55:07.768425 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Mar 20 17:55:07.768430 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Mar 20 17:55:07.768436 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Mar 20 17:55:07.768441 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Mar 20 17:55:07.768448 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Mar 20 17:55:07.768454 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Mar 20 17:55:07.768459 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Mar 20 17:55:07.768465 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Mar 20 17:55:07.768470 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Mar 20 17:55:07.768476 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Mar 20 17:55:07.768481 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Mar 20 17:55:07.768486 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Mar 20 17:55:07.768492 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Mar 20 17:55:07.768497 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Mar 20 17:55:07.768504 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Mar 20 17:55:07.768509 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Mar 20 17:55:07.768515 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Mar 20 17:55:07.768520 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Mar 20 17:55:07.768526 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Mar 20 17:55:07.768531 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Mar 20 17:55:07.768537 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Mar 20 17:55:07.768542 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Mar 20 17:55:07.768548 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Mar 20 17:55:07.768553 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Mar 20 17:55:07.768559 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Mar 20 17:55:07.768568 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Mar 20 17:55:07.768574 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Mar 20 17:55:07.768579 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Mar 20 17:55:07.768585 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Mar 20 17:55:07.768590 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Mar 20 17:55:07.768595 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Mar 20 17:55:07.768601 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Mar 20 17:55:07.768606 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Mar 20 17:55:07.768612 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Mar 20 17:55:07.768619 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Mar 20 17:55:07.768624 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Mar 20 17:55:07.768629 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Mar 20 17:55:07.768635 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Mar 20 17:55:07.768640 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Mar 20 17:55:07.768646 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Mar 20 17:55:07.768651 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Mar 20 17:55:07.768656 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Mar 20 17:55:07.768662 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Mar 20 17:55:07.768668 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Mar 20 17:55:07.768674 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Mar 20 17:55:07.768679 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Mar 20 17:55:07.768684 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Mar 20 17:55:07.768690 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Mar 20 17:55:07.768695 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Mar 20 17:55:07.768701 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Mar 20 17:55:07.768706 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Mar 20 17:55:07.768711 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Mar 20 17:55:07.768717 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Mar 20 17:55:07.768723 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Mar 20 17:55:07.768729 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Mar 20 17:55:07.768734 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Mar 20 17:55:07.768740 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Mar 20 17:55:07.769133 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Mar 20 17:55:07.769143 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Mar 20 17:55:07.769149 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Mar 20 17:55:07.769154 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Mar 20 17:55:07.769160 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Mar 20 17:55:07.769165 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Mar 20 17:55:07.769173 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Mar 20 17:55:07.769178 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Mar 20 17:55:07.769184 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Mar 20 17:55:07.769189 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Mar 20 17:55:07.769195 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Mar 20 17:55:07.769200 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Mar 20 17:55:07.769206 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Mar 20 17:55:07.769211 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Mar 20 17:55:07.769216 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Mar 20 17:55:07.769223 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Mar 20 17:55:07.769228 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Mar 20 17:55:07.769234 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Mar 20 17:55:07.769239 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Mar 20 17:55:07.769245 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Mar 20 17:55:07.769256 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Mar 20 17:55:07.769271 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Mar 20 17:55:07.769276 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Mar 20 17:55:07.769286 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Mar 20 17:55:07.769293 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Mar 20 17:55:07.769300 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Mar 20 17:55:07.769306 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Mar 20 
17:55:07.769315 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Mar 20 17:55:07.769322 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Mar 20 17:55:07.769327 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Mar 20 17:55:07.769333 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Mar 20 17:55:07.769338 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Mar 20 17:55:07.769344 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Mar 20 17:55:07.769349 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Mar 20 17:55:07.769354 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Mar 20 17:55:07.769362 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Mar 20 17:55:07.769367 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Mar 20 17:55:07.769373 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Mar 20 17:55:07.769378 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Mar 20 17:55:07.769384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Mar 20 17:55:07.769390 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 20 17:55:07.769395 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Mar 20 17:55:07.769401 kernel: TSC deadline timer available Mar 20 17:55:07.769406 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Mar 20 17:55:07.769412 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Mar 20 17:55:07.769419 kernel: Booting paravirtualized kernel on VMware hypervisor Mar 20 17:55:07.769424 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 20 17:55:07.769430 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Mar 20 17:55:07.769441 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Mar 20 17:55:07.769446 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Mar 20 17:55:07.769452 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Mar 20 17:55:07.769457 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Mar 20 17:55:07.769463 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Mar 20 17:55:07.769469 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Mar 20 17:55:07.769475 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Mar 20 17:55:07.769488 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Mar 20 17:55:07.769495 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Mar 20 17:55:07.769501 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Mar 20 17:55:07.769506 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Mar 20 17:55:07.769512 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Mar 20 17:55:07.769518 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Mar 20 17:55:07.769524 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Mar 20 17:55:07.769530 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Mar 20 17:55:07.769536 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Mar 20 17:55:07.769542 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Mar 20 17:55:07.769547 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Mar 20 17:55:07.769554 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=30d38910dcb9abcb2ae1fb8c4b62196472dfae1a70f494441b86ff0de2ee88c9 Mar 20 17:55:07.769561 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 20 17:55:07.769567 kernel: random: crng init done Mar 20 17:55:07.769572 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Mar 20 17:55:07.769580 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Mar 20 17:55:07.769585 kernel: printk: log_buf_len min size: 262144 bytes Mar 20 17:55:07.769591 kernel: printk: log_buf_len: 1048576 bytes Mar 20 17:55:07.769597 kernel: printk: early log buf free: 239648(91%) Mar 20 17:55:07.769603 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 20 17:55:07.769609 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 20 17:55:07.769615 kernel: Fallback order for Node 0: 0 Mar 20 17:55:07.769621 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Mar 20 17:55:07.769626 kernel: Policy zone: DMA32 Mar 20 17:55:07.769633 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 20 17:55:07.769640 kernel: Memory: 1932280K/2096628K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 164088K reserved, 0K cma-reserved) Mar 20 17:55:07.769647 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Mar 20 17:55:07.769652 kernel: ftrace: allocating 37985 entries in 149 pages Mar 20 17:55:07.769658 kernel: ftrace: allocated 149 pages with 4 groups Mar 20 17:55:07.769665 kernel: Dynamic Preempt: voluntary Mar 20 17:55:07.769671 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 20 17:55:07.769678 kernel: rcu: RCU event tracing is enabled. Mar 20 17:55:07.769685 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Mar 20 17:55:07.769690 kernel: Trampoline variant of Tasks RCU enabled. Mar 20 17:55:07.769697 kernel: Rude variant of Tasks RCU enabled. Mar 20 17:55:07.769703 kernel: Tracing variant of Tasks RCU enabled. Mar 20 17:55:07.769708 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 20 17:55:07.769714 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Mar 20 17:55:07.769720 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Mar 20 17:55:07.769727 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Mar 20 17:55:07.769733 kernel: Console: colour VGA+ 80x25 Mar 20 17:55:07.769739 kernel: printk: console [tty0] enabled Mar 20 17:55:07.769753 kernel: printk: console [ttyS0] enabled Mar 20 17:55:07.769760 kernel: ACPI: Core revision 20230628 Mar 20 17:55:07.769766 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Mar 20 17:55:07.769772 kernel: APIC: Switch to symmetric I/O mode setup Mar 20 17:55:07.769778 kernel: x2apic enabled Mar 20 17:55:07.769784 kernel: APIC: Switched APIC routing to: physical x2apic Mar 20 17:55:07.769792 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 20 17:55:07.769798 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Mar 20 17:55:07.769804 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Mar 20 17:55:07.769810 kernel: Disabled fast string operations Mar 20 17:55:07.769818 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 20 17:55:07.769824 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 20 17:55:07.769830 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 20 17:55:07.769836 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Mar 20 17:55:07.769842 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Mar 20 17:55:07.769849 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Mar 20 17:55:07.769855 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 20 17:55:07.769861 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Mar 20 17:55:07.769867 kernel: RETBleed: Mitigation: Enhanced IBRS Mar 20 17:55:07.769873 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 20 17:55:07.769879 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 20 17:55:07.769885 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 20 17:55:07.769891 kernel: SRBDS: Unknown: Dependent on hypervisor status Mar 20 17:55:07.769897 kernel: GDS: Unknown: Dependent on hypervisor status Mar 20 17:55:07.769905 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 20 17:55:07.769911 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 20 17:55:07.769917 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 20 17:55:07.769922 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 20 17:55:07.769928 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 20 17:55:07.769935 kernel: Freeing SMP alternatives memory: 32K Mar 20 17:55:07.769941 kernel: pid_max: default: 131072 minimum: 1024 Mar 20 17:55:07.769947 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 20 17:55:07.769953 kernel: landlock: Up and running. Mar 20 17:55:07.769961 kernel: SELinux: Initializing. Mar 20 17:55:07.769967 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 20 17:55:07.769973 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 20 17:55:07.769979 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Mar 20 17:55:07.769985 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Mar 20 17:55:07.769991 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Mar 20 17:55:07.769997 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Mar 20 17:55:07.770003 kernel: Performance Events: Skylake events, core PMU driver. 
Mar 20 17:55:07.770009 kernel: core: CPUID marked event: 'cpu cycles' unavailable Mar 20 17:55:07.770017 kernel: core: CPUID marked event: 'instructions' unavailable Mar 20 17:55:07.770023 kernel: core: CPUID marked event: 'bus cycles' unavailable Mar 20 17:55:07.770029 kernel: core: CPUID marked event: 'cache references' unavailable Mar 20 17:55:07.770034 kernel: core: CPUID marked event: 'cache misses' unavailable Mar 20 17:55:07.770040 kernel: core: CPUID marked event: 'branch instructions' unavailable Mar 20 17:55:07.770046 kernel: core: CPUID marked event: 'branch misses' unavailable Mar 20 17:55:07.770052 kernel: ... version: 1 Mar 20 17:55:07.770058 kernel: ... bit width: 48 Mar 20 17:55:07.770065 kernel: ... generic registers: 4 Mar 20 17:55:07.770071 kernel: ... value mask: 0000ffffffffffff Mar 20 17:55:07.770077 kernel: ... max period: 000000007fffffff Mar 20 17:55:07.770083 kernel: ... fixed-purpose events: 0 Mar 20 17:55:07.770089 kernel: ... event mask: 000000000000000f Mar 20 17:55:07.770094 kernel: signal: max sigframe size: 1776 Mar 20 17:55:07.770100 kernel: rcu: Hierarchical SRCU implementation. Mar 20 17:55:07.770106 kernel: rcu: Max phase no-delay instances is 400. Mar 20 17:55:07.770112 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 20 17:55:07.770120 kernel: smp: Bringing up secondary CPUs ... Mar 20 17:55:07.770125 kernel: smpboot: x86: Booting SMP configuration: Mar 20 17:55:07.770131 kernel: .... node #0, CPUs: #1 Mar 20 17:55:07.770137 kernel: Disabled fast string operations Mar 20 17:55:07.770143 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Mar 20 17:55:07.770149 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Mar 20 17:55:07.770154 kernel: smp: Brought up 1 node, 2 CPUs Mar 20 17:55:07.770160 kernel: smpboot: Max logical packages: 128 Mar 20 17:55:07.770166 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Mar 20 17:55:07.770172 kernel: devtmpfs: initialized Mar 20 17:55:07.770179 kernel: x86/mm: Memory block size: 128MB Mar 20 17:55:07.770185 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Mar 20 17:55:07.770191 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 20 17:55:07.770197 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Mar 20 17:55:07.770203 kernel: pinctrl core: initialized pinctrl subsystem Mar 20 17:55:07.770209 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 20 17:55:07.770215 kernel: audit: initializing netlink subsys (disabled) Mar 20 17:55:07.770221 kernel: audit: type=2000 audit(1742493306.067:1): state=initialized audit_enabled=0 res=1 Mar 20 17:55:07.770228 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 20 17:55:07.770235 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 20 17:55:07.770241 kernel: cpuidle: using governor menu Mar 20 17:55:07.770247 kernel: Simple Boot Flag at 0x36 set to 0x80 Mar 20 17:55:07.770253 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 20 17:55:07.770259 kernel: dca service started, version 1.12.1 Mar 20 17:55:07.770265 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Mar 20 17:55:07.770270 kernel: PCI: Using configuration type 1 for base access Mar 20 17:55:07.770276 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 20 17:55:07.770282 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 20 17:55:07.770290 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 20 17:55:07.770295 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 20 17:55:07.770301 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 20 17:55:07.770307 kernel: ACPI: Added _OSI(Module Device) Mar 20 17:55:07.770313 kernel: ACPI: Added _OSI(Processor Device) Mar 20 17:55:07.770319 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 20 17:55:07.770325 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 20 17:55:07.770330 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 20 17:55:07.770336 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Mar 20 17:55:07.770343 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 20 17:55:07.770349 kernel: ACPI: Interpreter enabled Mar 20 17:55:07.770355 kernel: ACPI: PM: (supports S0 S1 S5) Mar 20 17:55:07.770361 kernel: ACPI: Using IOAPIC for interrupt routing Mar 20 17:55:07.770367 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 20 17:55:07.770372 kernel: PCI: Using E820 reservations for host bridge windows Mar 20 17:55:07.770378 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Mar 20 17:55:07.770384 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Mar 20 17:55:07.770478 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 20 17:55:07.770536 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Mar 20 17:55:07.770586 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Mar 20 17:55:07.770595 kernel: PCI host bridge to bus 0000:00 Mar 20 17:55:07.770645 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 20 17:55:07.770691 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Mar 20 17:55:07.770735 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 20 17:55:07.770806 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 20 17:55:07.770852 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Mar 20 17:55:07.770896 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Mar 20 17:55:07.770956 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Mar 20 17:55:07.771016 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Mar 20 17:55:07.771073 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Mar 20 17:55:07.771131 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Mar 20 17:55:07.771182 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Mar 20 17:55:07.771233 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Mar 20 17:55:07.771283 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Mar 20 17:55:07.771332 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Mar 20 17:55:07.771382 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Mar 20 17:55:07.771438 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Mar 20 17:55:07.771491 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Mar 20 17:55:07.771541 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Mar 20 17:55:07.771598 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Mar 20 17:55:07.771650 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Mar 20 17:55:07.771701 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Mar 20 17:55:07.771764 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Mar 20 17:55:07.771820 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Mar 20 17:55:07.771871 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Mar 20 17:55:07.771921 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Mar 20 17:55:07.771971 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Mar 20 17:55:07.772021 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 20 17:55:07.772075 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Mar 20 17:55:07.772131 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.772186 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.772241 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.772292 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.772347 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.772397 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.772457 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.772512 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.772569 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.772621 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.772680 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.772731 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.772809 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.772867 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.772922 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.772975 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773029 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.773081 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773136 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.773191 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773245 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.773296 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773351 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.773404 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773458 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.773512 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773568 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.773619 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773676 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.773728 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773800 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.773857 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.773912 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Mar 20 17:55:07.773964 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.774019 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.774071 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.774127 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.774181 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.774237 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.774289 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.774343 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.774396 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.774452 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.774503 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.774562 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.774614 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.774668 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.774720 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.775188 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.775246 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.775306 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.775359 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.775414 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.775472 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.775527 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.775579 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.775637 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.775688 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.775742 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.775826 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.775887 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.775940 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.775998 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Mar 20 17:55:07.776050 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.776104 kernel: pci_bus 0000:01: extended config space not accessible Mar 20 17:55:07.776156 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Mar 20 17:55:07.776209 kernel: pci_bus 0000:02: extended config space not accessible Mar 20 17:55:07.776218 kernel: acpiphp: Slot [32] registered Mar 20 17:55:07.776225 kernel: acpiphp: Slot [33] registered Mar 20 17:55:07.776232 kernel: acpiphp: Slot [34] registered Mar 20 17:55:07.776238 kernel: acpiphp: Slot [35] registered Mar 20 17:55:07.776244 kernel: acpiphp: Slot [36] registered Mar 20 17:55:07.776250 kernel: acpiphp: Slot [37] registered Mar 20 17:55:07.776256 kernel: acpiphp: Slot [38] registered Mar 20 17:55:07.776262 kernel: acpiphp: Slot [39] registered Mar 20 17:55:07.776268 kernel: acpiphp: Slot [40] registered Mar 20 17:55:07.776274 kernel: acpiphp: Slot [41] registered Mar 20 17:55:07.776280 kernel: acpiphp: Slot [42] registered Mar 20 
17:55:07.776285 kernel: acpiphp: Slot [43] registered Mar 20 17:55:07.776292 kernel: acpiphp: Slot [44] registered Mar 20 17:55:07.776298 kernel: acpiphp: Slot [45] registered Mar 20 17:55:07.776304 kernel: acpiphp: Slot [46] registered Mar 20 17:55:07.776310 kernel: acpiphp: Slot [47] registered Mar 20 17:55:07.776316 kernel: acpiphp: Slot [48] registered Mar 20 17:55:07.776322 kernel: acpiphp: Slot [49] registered Mar 20 17:55:07.776328 kernel: acpiphp: Slot [50] registered Mar 20 17:55:07.776333 kernel: acpiphp: Slot [51] registered Mar 20 17:55:07.776339 kernel: acpiphp: Slot [52] registered Mar 20 17:55:07.776346 kernel: acpiphp: Slot [53] registered Mar 20 17:55:07.776352 kernel: acpiphp: Slot [54] registered Mar 20 17:55:07.776358 kernel: acpiphp: Slot [55] registered Mar 20 17:55:07.776364 kernel: acpiphp: Slot [56] registered Mar 20 17:55:07.776369 kernel: acpiphp: Slot [57] registered Mar 20 17:55:07.776375 kernel: acpiphp: Slot [58] registered Mar 20 17:55:07.776381 kernel: acpiphp: Slot [59] registered Mar 20 17:55:07.776387 kernel: acpiphp: Slot [60] registered Mar 20 17:55:07.776393 kernel: acpiphp: Slot [61] registered Mar 20 17:55:07.776399 kernel: acpiphp: Slot [62] registered Mar 20 17:55:07.776406 kernel: acpiphp: Slot [63] registered Mar 20 17:55:07.776457 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Mar 20 17:55:07.776508 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Mar 20 17:55:07.776559 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Mar 20 17:55:07.776608 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 20 17:55:07.776658 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Mar 20 17:55:07.776709 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Mar 20 17:55:07.776805 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Mar 20 17:55:07.776857 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Mar 20 17:55:07.776907 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Mar 20 17:55:07.776964 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Mar 20 17:55:07.777017 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Mar 20 17:55:07.777068 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Mar 20 17:55:07.777120 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Mar 20 17:55:07.777173 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Mar 20 17:55:07.777228 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Mar 20 17:55:07.777281 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Mar 20 17:55:07.777332 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Mar 20 17:55:07.777383 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Mar 20 17:55:07.777435 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Mar 20 17:55:07.777486 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Mar 20 17:55:07.777536 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Mar 20 17:55:07.777589 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Mar 20 17:55:07.777642 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Mar 20 17:55:07.777693 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Mar 20 17:55:07.777750 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Mar 20 17:55:07.777802 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Mar 20 17:55:07.777855 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Mar 20 17:55:07.777906 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Mar 20 17:55:07.777957 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Mar 20 17:55:07.778012 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Mar 20 17:55:07.778063 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Mar 20 17:55:07.778113 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 20 17:55:07.778166 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Mar 20 17:55:07.778220 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Mar 20 17:55:07.778270 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Mar 20 17:55:07.778323 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Mar 20 17:55:07.778374 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Mar 20 17:55:07.778424 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Mar 20 17:55:07.778481 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Mar 20 17:55:07.778533 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Mar 20 17:55:07.778584 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Mar 20 17:55:07.778644 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Mar 20 17:55:07.778697 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Mar 20 17:55:07.778757 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Mar 20 17:55:07.778811 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Mar 20 17:55:07.778863 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Mar 20 17:55:07.778929 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Mar 20 17:55:07.778984 kernel: pci 0000:0b:00.0: supports D1 D2 Mar 20 17:55:07.780828 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 20 17:55:07.780891 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Mar 20 17:55:07.780949 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Mar 20 17:55:07.782733 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Mar 20 17:55:07.782812 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Mar 20 17:55:07.782872 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Mar 20 17:55:07.782925 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Mar 20 17:55:07.782977 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Mar 20 17:55:07.783032 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Mar 20 17:55:07.783087 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Mar 20 17:55:07.783138 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Mar 20 17:55:07.783189 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Mar 20 17:55:07.783240 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Mar 20 17:55:07.783292 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Mar 20 17:55:07.783344 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Mar 20 17:55:07.783395 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 20 17:55:07.783452 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Mar 20 17:55:07.783505 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Mar 20 17:55:07.783555 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 20 17:55:07.783609 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Mar 20 17:55:07.783661 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Mar 20 17:55:07.783711 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Mar 20 17:55:07.784707 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Mar 20 17:55:07.784779 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Mar 20 17:55:07.784839 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Mar 20 17:55:07.784896 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Mar 20 17:55:07.784949 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Mar 20 17:55:07.785001 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 20 17:55:07.785055 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Mar 20 17:55:07.785106 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Mar 20 17:55:07.785156 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Mar 20 17:55:07.785207 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 20 17:55:07.785265 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Mar 20 17:55:07.785317 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Mar 20 17:55:07.785369 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Mar 20 17:55:07.785420 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Mar 20 17:55:07.785476 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Mar 20 17:55:07.785527 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Mar 20 17:55:07.785577 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Mar 20 17:55:07.785644 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Mar 20 17:55:07.785720 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Mar 20 17:55:07.785785 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Mar 20 17:55:07.785836 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 20 17:55:07.785890 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Mar 20 17:55:07.785942 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Mar 20 17:55:07.785993 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 20 17:55:07.786046 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Mar 20 17:55:07.786100 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Mar 20 17:55:07.786153 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Mar 20 17:55:07.786206 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Mar 20 17:55:07.786258 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Mar 20 17:55:07.786308 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Mar 20 17:55:07.786363 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Mar 20 17:55:07.786415 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Mar 20 17:55:07.786466 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Mar 20 17:55:07.786522 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Mar 20 17:55:07.786575 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Mar 20 17:55:07.786626 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Mar 20 17:55:07.786676 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Mar 20 17:55:07.786729 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Mar 20 17:55:07.786877 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Mar 20 17:55:07.786929 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Mar 20 17:55:07.786981 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Mar 20 17:55:07.787040 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Mar 20 17:55:07.787091 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Mar 20 17:55:07.787142 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Mar 20 17:55:07.787197 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Mar 20 17:55:07.787247 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Mar 20 17:55:07.787297 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 20 17:55:07.787350 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Mar 20 17:55:07.787401 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Mar 20 17:55:07.787465 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Mar 20 17:55:07.787520 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Mar 20 17:55:07.787572 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Mar 20 17:55:07.787624 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Mar 20 17:55:07.787677 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Mar 20 17:55:07.787728 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Mar 20 17:55:07.789162 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Mar 20 17:55:07.789222 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Mar 20 17:55:07.789277 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Mar 20 17:55:07.789329 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 20 17:55:07.789338 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Mar 20 17:55:07.789344 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Mar 20 17:55:07.789350 kernel: ACPI: PCI: Interrupt link LNKB disabled Mar 20 17:55:07.789356 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 20 17:55:07.789362 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Mar 20 17:55:07.789368 kernel: iommu: Default domain type: Translated Mar 20 17:55:07.789376 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 20 17:55:07.789382 kernel: PCI: Using ACPI for IRQ routing Mar 20 17:55:07.789388 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 20 17:55:07.789394 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Mar 20 17:55:07.789400 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Mar 20 17:55:07.789463 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Mar 20 17:55:07.789518 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Mar 20 17:55:07.789569 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 20 17:55:07.789578 kernel: vgaarb: loaded Mar 20 17:55:07.789586 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Mar 20 17:55:07.789592 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Mar 20 17:55:07.789598 kernel: clocksource: Switched to clocksource tsc-early Mar 20 17:55:07.789604 kernel: VFS: Disk quotas dquot_6.6.0 Mar 20 17:55:07.789610 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 20 17:55:07.789616 kernel: pnp: PnP ACPI init Mar 20 17:55:07.789670 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Mar 20 17:55:07.789718 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Mar 20 17:55:07.789840 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Mar 20 17:55:07.789894 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Mar 20 17:55:07.789943 kernel: pnp 00:06: [dma 2] Mar 20 17:55:07.789998 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Mar 20 17:55:07.790045 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Mar 20 17:55:07.790092 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Mar 20 17:55:07.790100 kernel: pnp: PnP ACPI: found 8 devices Mar 20 17:55:07.790109 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 20 17:55:07.790115 kernel: NET: Registered PF_INET protocol family Mar 20 17:55:07.790121 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 20 17:55:07.790127 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 20 17:55:07.790133 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 20 17:55:07.790139 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 20 17:55:07.790145 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 20 17:55:07.790151 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 20 17:55:07.790157 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 20 17:55:07.790164 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 20 17:55:07.790170 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 20 17:55:07.790176 kernel: NET: Registered PF_XDP protocol family Mar 20 17:55:07.790229 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Mar 20 17:55:07.790284 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 20 17:55:07.790338 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 20 17:55:07.790392 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 20 17:55:07.790449 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 20 17:55:07.790502 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Mar 20 17:55:07.790557 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Mar 20 17:55:07.790610 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Mar 20 17:55:07.790664 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Mar 20 17:55:07.790716 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Mar 20 17:55:07.790779 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Mar 20 17:55:07.790834 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Mar 20 17:55:07.790886 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Mar 20 17:55:07.790939 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Mar 20 17:55:07.790992 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Mar 20 17:55:07.791049 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Mar 20 17:55:07.791101 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Mar 20 17:55:07.791153 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Mar 20 17:55:07.791205 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Mar 20 17:55:07.791257 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Mar 20 17:55:07.791310 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Mar 20 17:55:07.791363 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Mar 20 17:55:07.791414 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Mar 20 17:55:07.791465 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Mar 20 17:55:07.791516 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Mar 20 17:55:07.791568 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.791620 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.791672 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.791726 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.792413 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.792476 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.792531 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.792584 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.792636 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.792687 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.792738 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Mar 20 17:55:07.792839 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.792891 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.792943 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.792994 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793044 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793095 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793146 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793197 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793250 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793303 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793353 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793404 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793461 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793515 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793567 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793618 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793672 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793724 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793782 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793835 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793886 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.793938 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.793988 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794040 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.794094 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794146 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.794198 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794251 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.794302 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794354 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.794405 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794458 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.794509 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794562 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.794613 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794664 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.794714 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794777 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.794828 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794879 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Mar 20 17:55:07.794931 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.794995 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.795056 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.795108 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.795159 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.795211 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.795261 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.795313 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.795365 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.795415 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.795472 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.795524 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.795578 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.795629 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.795680 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.795732 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796069 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796124 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796194 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796249 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796299 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796352 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796402 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796453 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796502 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796553 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796603 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796654 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796705 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796779 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796838 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796894 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Mar 20 17:55:07.796946 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Mar 20 17:55:07.796999 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Mar 20 17:55:07.797051 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Mar 20 17:55:07.798704 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Mar 20 17:55:07.798817 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Mar 20 17:55:07.798875 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 20 17:55:07.798933 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Mar 20 17:55:07.798992 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Mar 
20 17:55:07.799045 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Mar 20 17:55:07.799096 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Mar 20 17:55:07.799146 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Mar 20 17:55:07.799200 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Mar 20 17:55:07.799251 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Mar 20 17:55:07.799301 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Mar 20 17:55:07.799352 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Mar 20 17:55:07.799404 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Mar 20 17:55:07.799467 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Mar 20 17:55:07.799519 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Mar 20 17:55:07.799570 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Mar 20 17:55:07.799622 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Mar 20 17:55:07.799673 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Mar 20 17:55:07.799725 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Mar 20 17:55:07.799792 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Mar 20 17:55:07.799844 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Mar 20 17:55:07.799894 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 20 17:55:07.799948 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Mar 20 17:55:07.800000 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Mar 20 17:55:07.800052 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Mar 20 17:55:07.800105 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Mar 20 17:55:07.800157 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Mar 20 17:55:07.800207 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Mar 20 17:55:07.800262 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Mar 20 17:55:07.800313 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Mar 20 17:55:07.800365 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Mar 20 17:55:07.800421 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Mar 20 17:55:07.800475 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Mar 20 17:55:07.800525 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Mar 20 17:55:07.800576 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Mar 20 17:55:07.800627 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Mar 20 17:55:07.800680 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Mar 20 17:55:07.800732 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Mar 20 17:55:07.801994 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Mar 20 17:55:07.802051 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Mar 20 17:55:07.802107 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Mar 20 17:55:07.802159 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Mar 20 17:55:07.802211 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Mar 20 17:55:07.802262 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Mar 20 17:55:07.802315 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Mar 20 17:55:07.802698 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Mar 20 17:55:07.802774 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 20 17:55:07.802855 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Mar 20 17:55:07.802910 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Mar 20 17:55:07.802963 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 20 17:55:07.803015 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Mar 20 17:55:07.803067 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Mar 20 17:55:07.803120 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Mar 20 17:55:07.803173 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Mar 20 17:55:07.803224 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Mar 20 17:55:07.803276 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Mar 20 17:55:07.803332 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Mar 20 17:55:07.803384 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Mar 20 17:55:07.803435 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 20 17:55:07.803489 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Mar 20 17:55:07.803540 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Mar 20 17:55:07.803591 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Mar 20 17:55:07.803643 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 20 17:55:07.803696 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Mar 20 17:55:07.803889 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Mar 20 17:55:07.803945 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Mar 20 17:55:07.803999 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Mar 20 17:55:07.804052 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Mar 20 17:55:07.804103 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Mar 20 17:55:07.804152 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Mar 20 17:55:07.804202 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Mar 20 17:55:07.804254 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Mar 20 17:55:07.804305 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Mar 20 17:55:07.804356 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 20 17:55:07.804408 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Mar 20 17:55:07.804462 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Mar 20 17:55:07.804513 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 20 17:55:07.804566 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Mar 20 17:55:07.804616 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Mar 20 17:55:07.804667 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Mar 20 17:55:07.804721 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Mar 20 17:55:07.804779 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Mar 20 17:55:07.804830 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Mar 20 17:55:07.804882 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Mar 20 17:55:07.804932 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Mar 20 17:55:07.804988 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Mar 20 17:55:07.805041 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Mar 20 17:55:07.805092 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Mar 20 17:55:07.805155 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Mar 20 17:55:07.805208 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Mar 20 17:55:07.805267 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Mar 20 17:55:07.805331 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Mar 20 17:55:07.805385 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Mar 20 17:55:07.805439 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Mar 20 17:55:07.805496 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Mar 20 17:55:07.805547 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Mar 20 17:55:07.805598 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Mar 20 17:55:07.805650 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Mar 20 17:55:07.805702 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Mar 20 17:55:07.806175 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 20 17:55:07.806240 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Mar 20 17:55:07.806295 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Mar 20 17:55:07.806348 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Mar 20 17:55:07.806401 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Mar 20 17:55:07.806455 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Mar 20 17:55:07.806507 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Mar 20 17:55:07.806559 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Mar 20 17:55:07.806613 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Mar 20 17:55:07.806663 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Mar 20 17:55:07.806716 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Mar 20 17:55:07.806788 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Mar 20 17:55:07.806841 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 20 17:55:07.806892 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Mar 20 17:55:07.806942 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Mar 20 17:55:07.806988 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Mar 20 17:55:07.807375 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Mar 20 17:55:07.807423 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Mar 20 17:55:07.807475 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Mar 20 17:55:07.807523 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Mar 20 17:55:07.807847 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Mar 20 17:55:07.807899 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Mar 20 17:55:07.807952 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Mar 20 17:55:07.807999 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Mar 20 17:55:07.808046 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Mar 20 17:55:07.808091 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Mar 20 17:55:07.808144 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Mar 20 17:55:07.808191 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Mar 20 17:55:07.808238 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Mar 20 17:55:07.808292 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Mar 20 17:55:07.808339 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Mar 20 17:55:07.808385 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Mar 20 17:55:07.808441 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Mar 20 17:55:07.808488 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Mar 20 17:55:07.808535 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Mar 20 17:55:07.808587 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Mar 20 17:55:07.808636 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Mar 20 17:55:07.808688 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Mar 20 17:55:07.808735 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Mar 20 17:55:07.809292 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Mar 20 17:55:07.809346 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Mar 20 17:55:07.809399 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Mar 20 17:55:07.809456 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Mar 20 17:55:07.809513 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Mar 20 17:55:07.809568 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Mar 20 17:55:07.809621 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Mar 20 17:55:07.809669 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Mar 20 17:55:07.809717 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Mar 20 17:55:07.809819 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Mar 20 17:55:07.809867 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Mar 20 17:55:07.809914 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Mar 20 17:55:07.809965 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Mar 20 17:55:07.810012 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Mar 20 17:55:07.810057 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Mar 20 17:55:07.810110 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Mar 20 17:55:07.810158 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Mar 20 17:55:07.810209 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Mar 20 17:55:07.810256 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Mar 20 17:55:07.810307 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Mar 20 17:55:07.810378 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Mar 20 17:55:07.810432 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Mar 20 17:55:07.810483 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Mar 20 17:55:07.810537 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Mar 20 17:55:07.810586 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Mar 20 17:55:07.810637 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Mar 20 17:55:07.810684 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Mar 20 17:55:07.810731 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Mar 20 17:55:07.810797 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Mar 20 17:55:07.810845 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Mar 20 17:55:07.810892 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Mar 20 17:55:07.810943 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Mar 20 17:55:07.810990 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Mar 20 17:55:07.811037 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Mar 20 17:55:07.811087 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Mar 20 17:55:07.811137 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Mar 20 17:55:07.811188 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Mar 20 17:55:07.811236 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Mar 20 17:55:07.811287 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Mar 20 17:55:07.811334 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Mar 20 17:55:07.811386 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Mar 20 17:55:07.811435 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Mar 20 17:55:07.811488 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Mar 20 17:55:07.811538 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Mar 20 17:55:07.811592 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Mar 20 17:55:07.811640 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Mar 20 17:55:07.811690 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Mar 20 17:55:07.811741 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Mar 20 17:55:07.811810 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Mar 20 17:55:07.811858 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Mar 20 17:55:07.811909 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Mar 20 17:55:07.811958 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Mar 20 17:55:07.812009 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Mar 20 17:55:07.812060 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Mar 20 17:55:07.812112 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Mar 20 17:55:07.812160 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Mar 20 17:55:07.812211 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Mar 20 17:55:07.812258 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Mar 20 17:55:07.812308 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Mar 20 17:55:07.812358 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Mar 20 17:55:07.812409 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Mar 20 17:55:07.812460 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Mar 20 17:55:07.812518 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 20 17:55:07.812529 kernel: PCI: CLS 32 bytes, default 64 Mar 20 17:55:07.812536 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 20 17:55:07.812542 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Mar 20 
17:55:07.812551 kernel: clocksource: Switched to clocksource tsc Mar 20 17:55:07.812557 kernel: Initialise system trusted keyrings Mar 20 17:55:07.812564 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 20 17:55:07.812570 kernel: Key type asymmetric registered Mar 20 17:55:07.812576 kernel: Asymmetric key parser 'x509' registered Mar 20 17:55:07.812582 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 20 17:55:07.812589 kernel: io scheduler mq-deadline registered Mar 20 17:55:07.812595 kernel: io scheduler kyber registered Mar 20 17:55:07.812601 kernel: io scheduler bfq registered Mar 20 17:55:07.812656 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Mar 20 17:55:07.812711 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.812797 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Mar 20 17:55:07.812851 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.812905 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Mar 20 17:55:07.812957 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813010 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Mar 20 17:55:07.813061 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813118 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Mar 20 17:55:07.813170 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813223 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Mar 20 17:55:07.813276 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813331 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Mar 20 17:55:07.813387 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813439 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Mar 20 17:55:07.813492 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813545 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Mar 20 17:55:07.813597 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813651 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Mar 20 17:55:07.813706 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813871 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Mar 20 17:55:07.813925 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.813979 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Mar 20 17:55:07.814031 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814083 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Mar 20 17:55:07.814137 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814190 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Mar 20 17:55:07.814241 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814294 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Mar 20 17:55:07.814348 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814402 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Mar 20 17:55:07.814456 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814510 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Mar 20 17:55:07.814562 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814615 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Mar 20 17:55:07.814666 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814719 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Mar 20 17:55:07.814792 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814846 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Mar 20 17:55:07.814897 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.814950 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Mar 20 17:55:07.815002 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815056 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Mar 20 17:55:07.815112 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815166 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Mar 20 17:55:07.815218 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815271 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Mar 20 17:55:07.815324 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815380 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Mar 20 17:55:07.815435 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815490 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Mar 20 17:55:07.815541 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815595 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Mar 20 17:55:07.815646 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815701 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Mar 20 17:55:07.815802 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815858 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Mar 20 17:55:07.815910 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.815963 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Mar 20 17:55:07.816014 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.816069 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Mar 20 17:55:07.816121 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.816173 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Mar 20 17:55:07.816225 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Mar 20 17:55:07.816235 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 20 17:55:07.816243 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 20 17:55:07.816250 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 20 17:55:07.816256 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Mar 20 17:55:07.816263 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 20 17:55:07.816269 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 20 17:55:07.816321 kernel: rtc_cmos 00:01: registered as rtc0 Mar 20 17:55:07.816369 kernel: rtc_cmos 00:01: setting system clock to 2025-03-20T17:55:07 UTC (1742493307) Mar 20 17:55:07.816416 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Mar 20 17:55:07.816427 kernel: intel_pstate: CPU model not supported Mar 20 17:55:07.816433 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 20 17:55:07.816440 kernel: NET: Registered PF_INET6 protocol family Mar 20 17:55:07.816446 kernel: Segment Routing with IPv6 Mar 20 17:55:07.816453 kernel: In-situ OAM (IOAM) with IPv6 Mar 20 17:55:07.816459 kernel: NET: Registered PF_PACKET protocol family Mar 20 17:55:07.816465 kernel: Key type dns_resolver registered Mar 20 17:55:07.816471 kernel: IPI shorthand broadcast: enabled Mar 20 17:55:07.816478 kernel: sched_clock: Marking stable (920004062, 226158974)->(1209466050, -63303014) Mar 20 17:55:07.816486 kernel: registered taskstats version 1 Mar 20 17:55:07.816492 kernel: Loading compiled-in X.509 certificates Mar 20 17:55:07.816498 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2c0605e0441a1fddfb1f70673dce1f0d470be9b5' Mar 20 17:55:07.816504 kernel: Key type .fscrypt registered Mar 20 17:55:07.816511 kernel: Key type fscrypt-provisioning registered Mar 20 17:55:07.816517 
kernel: ima: No TPM chip found, activating TPM-bypass! Mar 20 17:55:07.816523 kernel: ima: Allocated hash algorithm: sha1 Mar 20 17:55:07.816529 kernel: ima: No architecture policies found Mar 20 17:55:07.816537 kernel: clk: Disabling unused clocks Mar 20 17:55:07.816544 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 20 17:55:07.816550 kernel: Write protecting the kernel read-only data: 40960k Mar 20 17:55:07.816556 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 20 17:55:07.816563 kernel: Run /init as init process Mar 20 17:55:07.816569 kernel: with arguments: Mar 20 17:55:07.816575 kernel: /init Mar 20 17:55:07.816581 kernel: with environment: Mar 20 17:55:07.816587 kernel: HOME=/ Mar 20 17:55:07.816593 kernel: TERM=linux Mar 20 17:55:07.816600 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 20 17:55:07.816607 systemd[1]: Successfully made /usr/ read-only. Mar 20 17:55:07.816617 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 17:55:07.816624 systemd[1]: Detected virtualization vmware. Mar 20 17:55:07.816630 systemd[1]: Detected architecture x86-64. Mar 20 17:55:07.816636 systemd[1]: Running in initrd. Mar 20 17:55:07.816642 systemd[1]: No hostname configured, using default hostname. Mar 20 17:55:07.816650 systemd[1]: Hostname set to . Mar 20 17:55:07.816656 systemd[1]: Initializing machine ID from random generator. Mar 20 17:55:07.816663 systemd[1]: Queued start job for default target initrd.target. Mar 20 17:55:07.816669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 17:55:07.816676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 17:55:07.816683 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 20 17:55:07.816689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 17:55:07.816696 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 20 17:55:07.816704 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 20 17:55:07.816711 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 20 17:55:07.816718 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 20 17:55:07.816725 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 17:55:07.816732 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 17:55:07.816738 systemd[1]: Reached target paths.target - Path Units. Mar 20 17:55:07.816750 systemd[1]: Reached target slices.target - Slice Units. Mar 20 17:55:07.816759 systemd[1]: Reached target swap.target - Swaps. Mar 20 17:55:07.816766 systemd[1]: Reached target timers.target - Timer Units. Mar 20 17:55:07.816772 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 17:55:07.816785 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
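The rtc_cmos entry a few messages up records the hardware clock being pushed into the system clock as 2025-03-20T17:55:07 UTC with the matching epoch value 1742493307. As a quick, purely illustrative sanity check (not anything the kernel does here), that epoch value decodes back to the same UTC timestamp:

    # Illustrative check only: decode the epoch value printed by rtc_cmos above.
    from datetime import datetime, timezone

    epoch = 1742493307  # taken from the rtc_cmos message above
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-03-20T17:55:07+00:00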
Mar 20 17:55:07.816792 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 20 17:55:07.816799 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 20 17:55:07.816805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 17:55:07.816812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 17:55:07.816819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 17:55:07.816826 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 17:55:07.816833 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 20 17:55:07.816839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 17:55:07.816846 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 20 17:55:07.816852 systemd[1]: Starting systemd-fsck-usr.service... Mar 20 17:55:07.816859 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 17:55:07.816865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 17:55:07.816872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 17:55:07.816878 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 20 17:55:07.816886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 17:55:07.816893 systemd[1]: Finished systemd-fsck-usr.service. Mar 20 17:55:07.816900 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 17:55:07.816920 systemd-journald[218]: Collecting audit messages is disabled. Mar 20 17:55:07.816938 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 17:55:07.816945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 17:55:07.816952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 17:55:07.816959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 17:55:07.816967 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 17:55:07.816973 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 20 17:55:07.816980 kernel: Bridge firewalling registered Mar 20 17:55:07.816987 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 17:55:07.816994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 17:55:07.817000 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 17:55:07.817007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 17:55:07.817013 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 20 17:55:07.817022 systemd-journald[218]: Journal started Mar 20 17:55:07.817037 systemd-journald[218]: Runtime Journal (/run/log/journal/1e9625204cfc4fd3be460252d9e0f033) is 4.8M, max 38.6M, 33.7M free. Mar 20 17:55:07.767270 systemd-modules-load[219]: Inserted module 'overlay' Mar 20 17:55:07.796802 systemd-modules-load[219]: Inserted module 'br_netfilter' Mar 20 17:55:07.823145 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 20 17:55:07.824840 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 17:55:07.826174 dracut-cmdline[240]: dracut-dracut-053 Mar 20 17:55:07.827531 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=30d38910dcb9abcb2ae1fb8c4b62196472dfae1a70f494441b86ff0de2ee88c9 Mar 20 17:55:07.834345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 17:55:07.835672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 17:55:07.863068 systemd-resolved[275]: Positive Trust Anchors: Mar 20 17:55:07.863077 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 17:55:07.863100 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 17:55:07.865618 systemd-resolved[275]: Defaulting to hostname 'linux'. Mar 20 17:55:07.866522 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 17:55:07.866659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 17:55:07.877765 kernel: SCSI subsystem initialized Mar 20 17:55:07.883759 kernel: Loading iSCSI transport class v2.0-870. Mar 20 17:55:07.890798 kernel: iscsi: registered transport (tcp) Mar 20 17:55:07.903808 kernel: iscsi: registered transport (qla4xxx) Mar 20 17:55:07.903859 kernel: QLogic iSCSI HBA Driver Mar 20 17:55:07.924183 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 20 17:55:07.925020 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 20 17:55:07.944134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 20 17:55:07.944184 kernel: device-mapper: uevent: version 1.0.3 Mar 20 17:55:07.945232 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 20 17:55:07.976769 kernel: raid6: avx2x4 gen() 47397 MB/s Mar 20 17:55:07.992795 kernel: raid6: avx2x2 gen() 53374 MB/s Mar 20 17:55:08.010009 kernel: raid6: avx2x1 gen() 44714 MB/s Mar 20 17:55:08.010053 kernel: raid6: using algorithm avx2x2 gen() 53374 MB/s Mar 20 17:55:08.028022 kernel: raid6: .... xor() 32072 MB/s, rmw enabled Mar 20 17:55:08.028074 kernel: raid6: using avx2x2 recovery algorithm Mar 20 17:55:08.040758 kernel: xor: automatically using best checksumming function avx Mar 20 17:55:08.129764 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 20 17:55:08.135389 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 20 17:55:08.136298 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
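The dracut-cmdline hook above simply echoes the kernel command line (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., flatcar.autologin, and so on) for later initrd units to consume. As an illustrative sketch only, and not dracut's or systemd's actual parser, such a parameter string splits into bare flags and key=value pairs like this:

    # Minimal kernel-command-line splitter; illustrative sketch only.
    def parse_cmdline(cmdline: str) -> tuple[dict[str, str], set[str]]:
        params: dict[str, str] = {}
        flags: set[str] = set()
        for token in cmdline.split():
            if "=" in token:
                key, value = token.split("=", 1)
                params[key] = value   # a later duplicate (e.g. a second console=) wins here
            else:
                flags.add(token)      # bare switches such as flatcar.autologin
        return params, flags

    # Abbreviated sample from the dracut-cmdline message above.
    sample = "mount.usr=/dev/mapper/usr mount.usrflags=ro root=LABEL=ROOT flatcar.autologin"
    params, flags = parse_cmdline(sample)
    print(params["root"], "flatcar.autologin" in flags)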
Mar 20 17:55:08.148787 systemd-udevd[435]: Using default interface naming scheme 'v255'. Mar 20 17:55:08.151629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 17:55:08.152995 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 20 17:55:08.168357 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Mar 20 17:55:08.183268 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 17:55:08.184036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 17:55:08.266421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 17:55:08.268459 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 20 17:55:08.282030 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 20 17:55:08.283296 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 17:55:08.283781 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 17:55:08.284162 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 17:55:08.285199 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 20 17:55:08.298439 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 20 17:55:08.329761 kernel: libata version 3.00 loaded. Mar 20 17:55:08.331848 kernel: ata_piix 0000:00:07.1: version 2.13 Mar 20 17:55:08.348304 kernel: scsi host0: ata_piix Mar 20 17:55:08.348564 kernel: scsi host1: ata_piix Mar 20 17:55:08.348636 kernel: VMware PVSCSI driver - version 1.0.7.0-k Mar 20 17:55:08.348645 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Mar 20 17:55:08.348653 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Mar 20 17:55:08.351391 kernel: vmw_pvscsi: using 64bit dma Mar 20 17:55:08.351409 kernel: vmw_pvscsi: max_id: 16 Mar 20 17:55:08.351421 kernel: vmw_pvscsi: setting ring_pages to 8 Mar 20 17:55:08.357108 kernel: vmw_pvscsi: enabling reqCallThreshold Mar 20 17:55:08.357127 kernel: vmw_pvscsi: driver-based request coalescing enabled Mar 20 17:55:08.357136 kernel: vmw_pvscsi: using MSI-X Mar 20 17:55:08.358378 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Mar 20 17:55:08.358403 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Mar 20 17:55:08.360056 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2 Mar 20 17:55:08.360245 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Mar 20 17:55:08.364640 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Mar 20 17:55:08.364733 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Mar 20 17:55:08.375771 kernel: cryptd: max_cpu_qlen set to 1000 Mar 20 17:55:08.383051 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 17:55:08.383134 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 17:55:08.383471 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 17:55:08.383583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 17:55:08.383658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 17:55:08.383896 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 20 17:55:08.384535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 17:55:08.403044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 17:55:08.403872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 17:55:08.424132 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 17:55:08.519762 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Mar 20 17:55:08.524764 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Mar 20 17:55:08.531819 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Mar 20 17:55:08.537994 kernel: AVX2 version of gcm_enc/dec engaged. Mar 20 17:55:08.538029 kernel: AES CTR mode by8 optimization enabled Mar 20 17:55:08.545073 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Mar 20 17:55:08.552292 kernel: sd 2:0:0:0: [sda] Write Protect is off Mar 20 17:55:08.552369 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00 Mar 20 17:55:08.552432 kernel: sd 2:0:0:0: [sda] Cache data unavailable Mar 20 17:55:08.552492 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through Mar 20 17:55:08.552549 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Mar 20 17:55:08.559324 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 20 17:55:08.559336 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 20 17:55:08.559344 kernel: sd 2:0:0:0: [sda] Attached SCSI disk Mar 20 17:55:08.559415 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Mar 20 17:55:08.586760 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (496) Mar 20 17:55:08.596815 kernel: BTRFS: device fsid 5af3bf9c-0d36-4793-88d6-028c3ca48c10 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (485) Mar 20 17:55:08.597956 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Mar 20 17:55:08.607232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Mar 20 17:55:08.612629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Mar 20 17:55:08.617243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Mar 20 17:55:08.617378 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Mar 20 17:55:08.618092 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 20 17:55:08.649236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 20 17:55:09.658766 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 20 17:55:09.659072 disk-uuid[595]: The operation has completed successfully. Mar 20 17:55:09.697596 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 17:55:09.697673 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 17:55:09.708203 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 17:55:09.717532 sh[612]: Success Mar 20 17:55:09.725763 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 20 17:55:09.760569 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 17:55:09.763694 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 17:55:09.767826 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
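[Editor's note] Once the disk shows up, systemd finds the ROOT, OEM, EFI-SYSTEM and USR-A devices through their /dev/disk/by-label and /dev/disk/by-partuuid aliases. The sketch below simply walks those udev-maintained symlink directories and prints where each alias points; it is for inspection only and is not something the boot path runs.

    import os

    # Walk the udev-maintained alias directories and show where each symlink points.
    for subdir in ("by-label", "by-partuuid"):
        base = os.path.join("/dev/disk", subdir)
        if not os.path.isdir(base):
            continue
        for name in sorted(os.listdir(base)):
            link = os.path.join(base, name)
            print(f"{subdir}/{name} -> {os.path.realpath(link)}")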
Mar 20 17:55:09.780826 kernel: BTRFS info (device dm-0): first mount of filesystem 5af3bf9c-0d36-4793-88d6-028c3ca48c10 Mar 20 17:55:09.780861 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 20 17:55:09.780870 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 17:55:09.783300 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 17:55:09.783323 kernel: BTRFS info (device dm-0): using free space tree Mar 20 17:55:09.790759 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 20 17:55:09.791660 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 17:55:09.792521 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Mar 20 17:55:09.794818 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 17:55:09.813010 kernel: BTRFS info (device sda6): first mount of filesystem d877ba4c-bfdd-4ad4-94ef-51dbb6b505e4 Mar 20 17:55:09.813048 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 17:55:09.813057 kernel: BTRFS info (device sda6): using free space tree Mar 20 17:55:09.818781 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 20 17:55:09.821968 kernel: BTRFS info (device sda6): last unmount of filesystem d877ba4c-bfdd-4ad4-94ef-51dbb6b505e4 Mar 20 17:55:09.823200 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 17:55:09.823888 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 20 17:55:09.869235 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Mar 20 17:55:09.870829 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 20 17:55:09.919408 ignition[669]: Ignition 2.20.0 Mar 20 17:55:09.919416 ignition[669]: Stage: fetch-offline Mar 20 17:55:09.919442 ignition[669]: no configs at "/usr/lib/ignition/base.d" Mar 20 17:55:09.919447 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 20 17:55:09.919510 ignition[669]: parsed url from cmdline: "" Mar 20 17:55:09.919512 ignition[669]: no config URL provided Mar 20 17:55:09.919515 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 17:55:09.919519 ignition[669]: no config at "/usr/lib/ignition/user.ign" Mar 20 17:55:09.919899 ignition[669]: config successfully fetched Mar 20 17:55:09.919915 ignition[669]: parsing config with SHA512: 418512eb06be5f96f78078efd2f8e8d0f3bc7fd9a3abc9c2b7c62668e8248d86d68f4b487a79a0cf3e3b97dcc3b6121571d93168c121b98975798306dc678f5f Mar 20 17:55:09.923668 unknown[669]: fetched base config from "system" Mar 20 17:55:09.923928 ignition[669]: fetch-offline: fetch-offline passed Mar 20 17:55:09.923675 unknown[669]: fetched user config from "vmware" Mar 20 17:55:09.923974 ignition[669]: Ignition finished successfully Mar 20 17:55:09.924548 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 17:55:09.930851 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 17:55:09.932008 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
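[Editor's note] The fetch-offline stage above reports "parsing config with SHA512: ..." after reading /usr/lib/ignition/user.ign. A small sketch of producing the same kind of digest for an arbitrary config file with hashlib follows; the file path is supplied on the command line by the caller and is not taken from the log.

    import hashlib
    import sys

    def sha512_of(path, chunk=65536):
        """Stream a file through SHA-512 and return the hex digest."""
        digest = hashlib.sha512()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                digest.update(block)
        return digest.hexdigest()

    if __name__ == "__main__":
        print(sha512_of(sys.argv[1]))   # path to hash is supplied by the caller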
Mar 20 17:55:09.951034 systemd-networkd[802]: lo: Link UP Mar 20 17:55:09.951040 systemd-networkd[802]: lo: Gained carrier Mar 20 17:55:09.951832 systemd-networkd[802]: Enumeration completed Mar 20 17:55:09.952023 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 17:55:09.952080 systemd-networkd[802]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Mar 20 17:55:09.955567 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Mar 20 17:55:09.955693 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Mar 20 17:55:09.952182 systemd[1]: Reached target network.target - Network. Mar 20 17:55:09.952275 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 20 17:55:09.955242 systemd-networkd[802]: ens192: Link UP Mar 20 17:55:09.955245 systemd-networkd[802]: ens192: Gained carrier Mar 20 17:55:09.955467 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 17:55:09.968558 ignition[805]: Ignition 2.20.0 Mar 20 17:55:09.968565 ignition[805]: Stage: kargs Mar 20 17:55:09.968692 ignition[805]: no configs at "/usr/lib/ignition/base.d" Mar 20 17:55:09.968699 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 20 17:55:09.969219 ignition[805]: kargs: kargs passed Mar 20 17:55:09.969243 ignition[805]: Ignition finished successfully Mar 20 17:55:09.970596 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 20 17:55:09.971371 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 20 17:55:09.987756 ignition[812]: Ignition 2.20.0 Mar 20 17:55:09.988004 ignition[812]: Stage: disks Mar 20 17:55:09.988106 ignition[812]: no configs at "/usr/lib/ignition/base.d" Mar 20 17:55:09.988112 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 20 17:55:09.988626 ignition[812]: disks: disks passed Mar 20 17:55:09.988651 ignition[812]: Ignition finished successfully Mar 20 17:55:09.989671 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 20 17:55:09.990139 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 20 17:55:09.990393 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 17:55:09.990627 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 17:55:09.990838 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 17:55:09.991017 systemd[1]: Reached target basic.target - Basic System. Mar 20 17:55:09.991691 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 20 17:55:10.013437 systemd-fsck[820]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 20 17:55:10.014575 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 17:55:10.015798 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 17:55:10.086759 kernel: EXT4-fs (sda9): mounted filesystem bf9c440e-9fee-4e54-8539-b83f5a9eea2f r/w with ordered data mode. Quota mode: none. Mar 20 17:55:10.086801 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 17:55:10.087204 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 20 17:55:10.088195 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 17:55:10.090516 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
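[Editor's note] Here systemd-networkd enumerates links and ens192 gains carrier. The same link state can also be read from /sys/class/net; the sketch below is a generic helper that assumes nothing beyond that standard sysfs layout (the interface name is passed in, defaulting to lo).

    import os
    import sys

    def link_state(iface):
        base = os.path.join("/sys/class/net", iface)
        with open(os.path.join(base, "operstate")) as f:
            operstate = f.read().strip()
        try:
            with open(os.path.join(base, "carrier")) as f:
                carrier = f.read().strip() == "1"
        except OSError:          # reading carrier fails while the link is down
            carrier = False
        return operstate, carrier

    if __name__ == "__main__":
        iface = sys.argv[1] if len(sys.argv) > 1 else "lo"
        print(iface, link_state(iface))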
Mar 20 17:55:10.090800 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 20 17:55:10.090829 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 20 17:55:10.090845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 17:55:10.099561 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 17:55:10.100870 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 20 17:55:10.108477 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (828) Mar 20 17:55:10.108516 kernel: BTRFS info (device sda6): first mount of filesystem d877ba4c-bfdd-4ad4-94ef-51dbb6b505e4 Mar 20 17:55:10.108525 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 17:55:10.108533 kernel: BTRFS info (device sda6): using free space tree Mar 20 17:55:10.113763 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 20 17:55:10.114385 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 20 17:55:10.193388 initrd-setup-root[852]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 17:55:10.197059 initrd-setup-root[859]: cut: /sysroot/etc/group: No such file or directory Mar 20 17:55:10.199220 initrd-setup-root[866]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 17:55:10.201280 initrd-setup-root[873]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 17:55:10.299035 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 17:55:10.299691 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 20 17:55:10.301811 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 17:55:10.311759 kernel: BTRFS info (device sda6): last unmount of filesystem d877ba4c-bfdd-4ad4-94ef-51dbb6b505e4 Mar 20 17:55:10.328381 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 20 17:55:10.332396 ignition[941]: INFO : Ignition 2.20.0 Mar 20 17:55:10.332396 ignition[941]: INFO : Stage: mount Mar 20 17:55:10.332730 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 17:55:10.332730 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 20 17:55:10.333072 ignition[941]: INFO : mount: mount passed Mar 20 17:55:10.333513 ignition[941]: INFO : Ignition finished successfully Mar 20 17:55:10.333810 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 20 17:55:10.334447 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 20 17:55:10.779557 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 20 17:55:10.780843 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 17:55:10.800764 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (953) Mar 20 17:55:10.803835 kernel: BTRFS info (device sda6): first mount of filesystem d877ba4c-bfdd-4ad4-94ef-51dbb6b505e4 Mar 20 17:55:10.803852 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 20 17:55:10.803861 kernel: BTRFS info (device sda6): using free space tree Mar 20 17:55:10.807759 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 20 17:55:10.808099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
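[Editor's note] The sysroot, /sysroot/usr and /sysroot/oem mounts above can be cross-checked against the kernel's mount table. A tiny, generic sketch that parses /proc/self/mounts and prints the root and /sysroot entries (octal escapes such as \040 for spaces are left unexpanded for brevity):

    def mounts():
        entries = []
        with open("/proc/self/mounts") as f:
            for line in f:
                source, target, fstype, options, *_ = line.split()
                entries.append((source, target, fstype, options))
        return entries

    if __name__ == "__main__":
        for source, target, fstype, _ in mounts():
            if target == "/" or target.startswith("/sysroot"):
                print(f"{source} on {target} type {fstype}")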
Mar 20 17:55:10.819825 ignition[970]: INFO : Ignition 2.20.0 Mar 20 17:55:10.819825 ignition[970]: INFO : Stage: files Mar 20 17:55:10.820715 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 17:55:10.820715 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 20 17:55:10.821025 ignition[970]: DEBUG : files: compiled without relabeling support, skipping Mar 20 17:55:10.821430 ignition[970]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 20 17:55:10.821430 ignition[970]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 20 17:55:10.823731 ignition[970]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 20 17:55:10.823961 ignition[970]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 20 17:55:10.824323 unknown[970]: wrote ssh authorized keys file for user: core Mar 20 17:55:10.824576 ignition[970]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 20 17:55:10.826577 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 20 17:55:10.826577 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 20 17:55:10.870477 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 20 17:55:11.218246 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 20 17:55:11.218246 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 17:55:11.218621 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 20 17:55:11.584954 systemd-networkd[802]: ens192: Gained IPv6LL Mar 20 17:55:11.680604 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 20 17:55:11.739088 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 17:55:11.739088 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 20 17:55:11.739505 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 20 17:55:11.739505 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 20 17:55:11.739505 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 20 17:55:11.739505 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 17:55:11.739505 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 17:55:11.739505 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 17:55:11.739505 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
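[Editor's note] The files stage fetches the helm and cilium archives with numbered attempts ("GET ...: attempt #1"). The sketch below is not Ignition's fetcher; it is only a hedged illustration of a retrying download helper with the same attempt-numbered logging, using urllib from the standard library.

    import time
    import urllib.request

    def fetch_with_retries(url, dest, attempts=5, delay=2.0):
        """Download url to dest with attempt-numbered logging, loosely echoing the entries above."""
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
                    out.write(resp.read())
                print("GET result: OK")
                return
            except OSError as err:       # urllib.error.URLError is a subclass of OSError
                print(f"GET result: {err}")
                time.sleep(delay)
        raise RuntimeError(f"giving up on {url} after {attempts} attempts")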
"/sysroot/home/core/nfs-pvc.yaml" Mar 20 17:55:11.740561 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 17:55:11.740561 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 17:55:11.740561 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 20 17:55:11.740561 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 20 17:55:11.740561 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 20 17:55:11.740561 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 20 17:55:12.179630 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 20 17:55:12.387375 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 20 17:55:12.387375 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Mar 20 17:55:12.387829 ignition[970]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Mar 20 17:55:12.387829 ignition[970]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Mar 20 17:55:12.392938 ignition[970]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 17:55:12.393159 ignition[970]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 17:55:12.393159 ignition[970]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Mar 20 17:55:12.393159 ignition[970]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Mar 20 17:55:12.393159 ignition[970]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 17:55:12.393159 ignition[970]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 17:55:12.393159 ignition[970]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Mar 20 17:55:12.393159 ignition[970]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Mar 20 17:55:12.634509 ignition[970]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 17:55:12.637348 ignition[970]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 17:55:12.637348 ignition[970]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Mar 20 17:55:12.637348 ignition[970]: INFO : files: op(13): [started] setting 
preset to enabled for "prepare-helm.service" Mar 20 17:55:12.637348 ignition[970]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Mar 20 17:55:12.638098 ignition[970]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 20 17:55:12.638098 ignition[970]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 20 17:55:12.638098 ignition[970]: INFO : files: files passed Mar 20 17:55:12.638098 ignition[970]: INFO : Ignition finished successfully Mar 20 17:55:12.639457 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 20 17:55:12.640108 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 20 17:55:12.641816 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 20 17:55:12.650058 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 20 17:55:12.650126 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 20 17:55:12.653175 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 17:55:12.653175 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 20 17:55:12.654186 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 17:55:12.654822 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 17:55:12.655150 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 20 17:55:12.655681 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 20 17:55:12.687711 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 20 17:55:12.687779 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 20 17:55:12.688169 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 20 17:55:12.688308 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 20 17:55:12.688504 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 20 17:55:12.688951 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 20 17:55:12.703430 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 17:55:12.704415 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 20 17:55:12.714257 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 20 17:55:12.714535 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 17:55:12.714693 systemd[1]: Stopped target timers.target - Timer Units. Mar 20 17:55:12.714841 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 20 17:55:12.714913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 17:55:12.715128 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 17:55:12.715353 systemd[1]: Stopped target basic.target - Basic System. Mar 20 17:55:12.715547 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 17:55:12.715736 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
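[Editor's note] After the files stage, the completion service greps for enabled-sysext.conf in two candidate locations and finds neither. A trivial sketch of the same "first existing path wins" probe, with the two paths copied from the log:

    import os

    def first_existing(*candidates):
        """Return the first path that exists, or None."""
        for path in candidates:
            if os.path.exists(path):
                return path
        return None

    conf = first_existing("/sysroot/etc/flatcar/enabled-sysext.conf",
                          "/sysroot/usr/share/flatcar/enabled-sysext.conf")
    print(conf or "no enabled-sysext.conf found")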
Mar 20 17:55:12.716131 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 20 17:55:12.716383 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 20 17:55:12.716598 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 17:55:12.716877 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 17:55:12.717076 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 17:55:12.717267 systemd[1]: Stopped target swap.target - Swaps. Mar 20 17:55:12.717428 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 17:55:12.717492 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 17:55:12.717776 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 17:55:12.718005 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 17:55:12.718163 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 17:55:12.718209 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 17:55:12.718366 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 17:55:12.718427 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 17:55:12.718663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 20 17:55:12.718728 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 17:55:12.718981 systemd[1]: Stopped target paths.target - Path Units. Mar 20 17:55:12.719121 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 17:55:12.720856 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 17:55:12.721019 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 17:55:12.721213 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 17:55:12.721391 systemd[1]: iscsid.socket: Deactivated successfully. Mar 20 17:55:12.721459 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 17:55:12.721664 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 17:55:12.721708 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 17:55:12.721953 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 17:55:12.722016 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 17:55:12.722256 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 17:55:12.722314 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 17:55:12.723054 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 17:55:12.723149 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 20 17:55:12.723239 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 17:55:12.725640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 20 17:55:12.725738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 20 17:55:12.725838 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 17:55:12.726034 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 17:55:12.726114 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 17:55:12.728865 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 20 17:55:12.729757 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 20 17:55:12.736217 ignition[1026]: INFO : Ignition 2.20.0 Mar 20 17:55:12.736530 ignition[1026]: INFO : Stage: umount Mar 20 17:55:12.736779 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 17:55:12.736923 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Mar 20 17:55:12.737654 ignition[1026]: INFO : umount: umount passed Mar 20 17:55:12.737826 ignition[1026]: INFO : Ignition finished successfully Mar 20 17:55:12.738546 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 20 17:55:12.738915 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 17:55:12.738978 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 17:55:12.739308 systemd[1]: Stopped target network.target - Network. Mar 20 17:55:12.739505 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 17:55:12.739536 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 17:55:12.739646 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 17:55:12.739669 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 17:55:12.739779 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 17:55:12.739803 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 17:55:12.739906 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 20 17:55:12.739928 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 17:55:12.740212 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 17:55:12.740342 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 17:55:12.741849 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 17:55:12.741924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 20 17:55:12.744086 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 17:55:12.744232 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 17:55:12.744256 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 17:55:12.745326 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 17:55:12.747650 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 20 17:55:12.747715 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 17:55:12.748448 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 17:55:12.748567 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 17:55:12.748584 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 17:55:12.749182 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 17:55:12.749279 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 17:55:12.749307 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 17:55:12.749440 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Mar 20 17:55:12.749465 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Mar 20 17:55:12.749586 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 20 17:55:12.749607 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 17:55:12.749795 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 20 17:55:12.749816 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 20 17:55:12.749938 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 17:55:12.750911 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 17:55:12.761034 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 20 17:55:12.761275 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 17:55:12.761647 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 20 17:55:12.761850 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 20 17:55:12.762488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 20 17:55:12.762524 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 20 17:55:12.762654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 20 17:55:12.762671 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 17:55:12.762858 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 20 17:55:12.762882 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 20 17:55:12.763153 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 20 17:55:12.763178 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 20 17:55:12.763494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 17:55:12.763518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 17:55:12.764832 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 20 17:55:12.765082 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 20 17:55:12.765223 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 17:55:12.765572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 17:55:12.765710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 17:55:12.776775 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 20 17:55:12.776845 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 20 17:55:12.843437 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 20 17:55:12.843504 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 20 17:55:12.843826 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 20 17:55:12.843948 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 20 17:55:12.843976 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 20 17:55:12.844546 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 20 17:55:12.869413 systemd[1]: Switching root. Mar 20 17:55:12.904801 systemd-journald[218]: Journal stopped Mar 20 17:55:14.322421 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). 
Mar 20 17:55:14.322442 kernel: SELinux: policy capability network_peer_controls=1 Mar 20 17:55:14.322450 kernel: SELinux: policy capability open_perms=1 Mar 20 17:55:14.322456 kernel: SELinux: policy capability extended_socket_class=1 Mar 20 17:55:14.322462 kernel: SELinux: policy capability always_check_network=0 Mar 20 17:55:14.322467 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 20 17:55:14.322475 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 20 17:55:14.322481 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 20 17:55:14.322487 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 20 17:55:14.322493 kernel: audit: type=1403 audit(1742493313.795:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 20 17:55:14.322500 systemd[1]: Successfully loaded SELinux policy in 31.830ms. Mar 20 17:55:14.322507 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.744ms. Mar 20 17:55:14.322514 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 17:55:14.322522 systemd[1]: Detected virtualization vmware. Mar 20 17:55:14.322529 systemd[1]: Detected architecture x86-64. Mar 20 17:55:14.322536 systemd[1]: Detected first boot. Mar 20 17:55:14.322543 systemd[1]: Initializing machine ID from random generator. Mar 20 17:55:14.322551 zram_generator::config[1072]: No configuration found. Mar 20 17:55:14.322636 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Mar 20 17:55:14.322647 kernel: Guest personality initialized and is active Mar 20 17:55:14.322653 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 20 17:55:14.322659 kernel: Initialized host personality Mar 20 17:55:14.322666 kernel: NET: Registered PF_VSOCK protocol family Mar 20 17:55:14.322673 systemd[1]: Populated /etc with preset unit settings. Mar 20 17:55:14.322682 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 20 17:55:14.322690 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Mar 20 17:55:14.322697 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 20 17:55:14.322703 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 20 17:55:14.322710 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 20 17:55:14.322716 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 20 17:55:14.322723 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 20 17:55:14.322732 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 20 17:55:14.322739 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 20 17:55:14.323103 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 20 17:55:14.323115 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 20 17:55:14.323123 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
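[Editor's note] The systemd banner above lists its build features as a +/- string. Purely as an illustration of reading that string, a small parser; the sample reuses a subset of the flags from the log.

    def parse_features(feature_string):
        """Split a systemd feature list like '+PAM +AUDIT -APPARMOR' into enabled/disabled sets."""
        enabled, disabled = set(), set()
        for token in feature_string.split():
            if token.startswith("+"):
                enabled.add(token[1:])
            elif token.startswith("-"):
                disabled.add(token[1:])
        return enabled, disabled

    # Subset of the flags reported in the log above:
    enabled, disabled = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SECCOMP +TPM2 -PWQUALITY")
    print("enabled:", sorted(enabled))
    print("disabled:", sorted(disabled))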
Mar 20 17:55:14.323130 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 20 17:55:14.323136 systemd[1]: Created slice user.slice - User and Session Slice. Mar 20 17:55:14.323143 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 17:55:14.323153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 17:55:14.323162 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 20 17:55:14.323170 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 20 17:55:14.323177 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 20 17:55:14.323184 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 17:55:14.323191 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 20 17:55:14.323198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 17:55:14.323206 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 20 17:55:14.323213 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 20 17:55:14.323220 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 20 17:55:14.323227 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 20 17:55:14.323234 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 17:55:14.323241 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 17:55:14.323248 systemd[1]: Reached target slices.target - Slice Units. Mar 20 17:55:14.323255 systemd[1]: Reached target swap.target - Swaps. Mar 20 17:55:14.323262 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 20 17:55:14.323271 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 20 17:55:14.323279 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 20 17:55:14.323286 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 17:55:14.323293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 17:55:14.323301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 17:55:14.323308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 20 17:55:14.323316 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 20 17:55:14.323323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 20 17:55:14.323330 systemd[1]: Mounting media.mount - External Media Directory... Mar 20 17:55:14.323337 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 17:55:14.323344 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 20 17:55:14.323351 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 20 17:55:14.323359 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 20 17:55:14.323367 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Mar 20 17:55:14.323374 systemd[1]: Reached target machines.target - Containers. Mar 20 17:55:14.323381 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 20 17:55:14.323388 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Mar 20 17:55:14.323395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 17:55:14.323402 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 20 17:55:14.323409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 17:55:14.323416 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 17:55:14.323425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 17:55:14.323433 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 20 17:55:14.323440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 17:55:14.323618 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 20 17:55:14.323634 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 20 17:55:14.323642 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 20 17:55:14.323649 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 20 17:55:14.323656 systemd[1]: Stopped systemd-fsck-usr.service. Mar 20 17:55:14.323666 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 17:55:14.323674 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 17:55:14.323681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 17:55:14.323688 kernel: fuse: init (API version 7.39) Mar 20 17:55:14.323695 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 20 17:55:14.323702 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 20 17:55:14.323709 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 20 17:55:14.323716 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 17:55:14.323728 systemd[1]: verity-setup.service: Deactivated successfully. Mar 20 17:55:14.323736 systemd[1]: Stopped verity-setup.service. Mar 20 17:55:14.323757 kernel: loop: module loaded Mar 20 17:55:14.323767 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 17:55:14.323775 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 20 17:55:14.323783 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 20 17:55:14.323790 systemd[1]: Mounted media.mount - External Media Directory. Mar 20 17:55:14.323797 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 20 17:55:14.323818 systemd-journald[1165]: Collecting audit messages is disabled. Mar 20 17:55:14.323838 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 20 17:55:14.323845 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Mar 20 17:55:14.323853 systemd-journald[1165]: Journal started Mar 20 17:55:14.323869 systemd-journald[1165]: Runtime Journal (/run/log/journal/f6f7f7a89aef4567b0675d79adc4a5fa) is 4.8M, max 38.6M, 33.7M free. Mar 20 17:55:14.174031 systemd[1]: Queued start job for default target multi-user.target. Mar 20 17:55:14.181620 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 20 17:55:14.181855 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 20 17:55:14.324358 jq[1142]: true Mar 20 17:55:14.326771 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 17:55:14.327804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 17:55:14.328042 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 20 17:55:14.329081 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 20 17:55:14.335990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 17:55:14.336140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 17:55:14.336474 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 20 17:55:14.336691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 17:55:14.336814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 17:55:14.337035 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 20 17:55:14.337136 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 20 17:55:14.337350 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 17:55:14.337444 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 17:55:14.337683 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 20 17:55:14.338017 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 20 17:55:14.338237 jq[1188]: true Mar 20 17:55:14.348245 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 20 17:55:14.351557 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 20 17:55:14.354117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 20 17:55:14.354228 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 20 17:55:14.354248 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 17:55:14.354969 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 20 17:55:14.359423 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 20 17:55:14.360977 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 20 17:55:14.361150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 17:55:14.372905 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 20 17:55:14.377823 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 20 17:55:14.377958 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 17:55:14.380954 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Mar 20 17:55:14.381077 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 17:55:14.386829 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 20 17:55:14.388705 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 20 17:55:14.391719 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 17:55:14.391975 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 20 17:55:14.392175 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 20 17:55:14.392312 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 20 17:55:14.392538 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 20 17:55:14.395658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 17:55:14.396755 kernel: ACPI: bus type drm_connector registered Mar 20 17:55:14.397118 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 17:55:14.397243 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 17:55:14.415134 systemd-journald[1165]: Time spent on flushing to /var/log/journal/f6f7f7a89aef4567b0675d79adc4a5fa is 47.990ms for 1850 entries. Mar 20 17:55:14.415134 systemd-journald[1165]: System Journal (/var/log/journal/f6f7f7a89aef4567b0675d79adc4a5fa) is 8M, max 584.8M, 576.8M free. Mar 20 17:55:14.483944 systemd-journald[1165]: Received client request to flush runtime journal. Mar 20 17:55:14.483968 kernel: loop0: detected capacity change from 0 to 2960 Mar 20 17:55:14.435685 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 20 17:55:14.435870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 20 17:55:14.437825 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 20 17:55:14.475984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 17:55:14.478303 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 20 17:55:14.487525 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 20 17:55:14.504256 udevadm[1229]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 20 17:55:14.509385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 17:55:14.514594 ignition[1190]: Ignition 2.20.0 Mar 20 17:55:14.514783 ignition[1190]: deleting config from guestinfo properties Mar 20 17:55:14.564476 ignition[1190]: Successfully deleted config Mar 20 17:55:14.565269 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Mar 20 17:55:14.683764 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 20 17:55:14.749805 kernel: loop1: detected capacity change from 0 to 109808 Mar 20 17:55:14.873571 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 20 17:55:14.876577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 17:55:14.879678 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 20 17:55:14.913501 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. 
Mar 20 17:55:14.913512 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Mar 20 17:55:14.913753 kernel: loop2: detected capacity change from 0 to 210664 Mar 20 17:55:14.917868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 17:55:14.954895 kernel: loop3: detected capacity change from 0 to 151640 Mar 20 17:55:15.016779 kernel: loop4: detected capacity change from 0 to 2960 Mar 20 17:55:15.039800 kernel: loop5: detected capacity change from 0 to 109808 Mar 20 17:55:15.067766 kernel: loop6: detected capacity change from 0 to 210664 Mar 20 17:55:15.088862 kernel: loop7: detected capacity change from 0 to 151640 Mar 20 17:55:15.118470 (sd-merge)[1251]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Mar 20 17:55:15.118801 (sd-merge)[1251]: Merged extensions into '/usr'. Mar 20 17:55:15.123663 systemd[1]: Reload requested from client PID 1215 ('systemd-sysext') (unit systemd-sysext.service)... Mar 20 17:55:15.123673 systemd[1]: Reloading... Mar 20 17:55:15.166776 zram_generator::config[1276]: No configuration found. Mar 20 17:55:15.233470 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 20 17:55:15.251623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 17:55:15.297252 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 20 17:55:15.297397 systemd[1]: Reloading finished in 173 ms. Mar 20 17:55:15.312868 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 20 17:55:15.313220 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 20 17:55:15.319669 systemd[1]: Starting ensure-sysext.service... Mar 20 17:55:15.322844 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 17:55:15.324824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 17:55:15.339834 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Mar 20 17:55:15.339846 systemd[1]: Reloading... Mar 20 17:55:15.342484 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 20 17:55:15.342654 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 20 17:55:15.343341 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 20 17:55:15.343662 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Mar 20 17:55:15.344114 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Mar 20 17:55:15.350722 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 17:55:15.350727 systemd-tmpfiles[1337]: Skipping /boot Mar 20 17:55:15.359652 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 17:55:15.362768 systemd-tmpfiles[1337]: Skipping /boot Mar 20 17:55:15.372595 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Mar 20 17:55:15.412791 zram_generator::config[1367]: No configuration found. 
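[Editor's note] systemd-sysext (sd-merge) reports merging the containerd-flatcar, docker-flatcar, kubernetes and oem-vmware extensions into /usr. The log itself only confirms /etc/extensions as a drop-in location (kubernetes.raw was linked there earlier); the other directories in the sketch below are assumptions about common sysext search paths, listed only for inspection.

    import os

    # Directories to probe; only /etc/extensions is confirmed by this log (kubernetes.raw
    # was linked there during the Ignition files stage), the rest are assumed defaults.
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    for directory in SEARCH_DIRS:
        if not os.path.isdir(directory):
            continue
        for entry in sorted(os.listdir(directory)):
            kind = "image" if entry.endswith(".raw") else "tree"
            print(f"{directory}/{entry} ({kind})")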
Mar 20 17:55:15.517770 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 20 17:55:15.530761 kernel: ACPI: button: Power Button [PWRF] Mar 20 17:55:15.540575 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1369) Mar 20 17:55:15.543841 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 20 17:55:15.571040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 17:55:15.596158 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Mar 20 17:55:15.617760 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Mar 20 17:55:15.647123 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 20 17:55:15.647412 systemd[1]: Reloading finished in 307 ms. Mar 20 17:55:15.649507 (udev-worker)[1369]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Mar 20 17:55:15.652590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 17:55:15.657750 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 17:55:15.661781 kernel: mousedev: PS/2 mouse device common for all mice Mar 20 17:55:15.679799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Mar 20 17:55:15.688020 systemd[1]: Finished ensure-sysext.service. Mar 20 17:55:15.689351 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 17:55:15.691461 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 17:55:15.707889 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 20 17:55:15.710538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 17:55:15.715835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 17:55:15.717954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 17:55:15.719837 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 17:55:15.720032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 17:55:15.720719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 20 17:55:15.720935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 17:55:15.722347 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 20 17:55:15.724532 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 17:55:15.726855 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 17:55:15.733404 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 20 17:55:15.735179 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Mar 20 17:55:15.739456 ldconfig[1211]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 20 17:55:15.737803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 17:55:15.737924 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 20 17:55:15.738453 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 17:55:15.738705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 17:55:15.738837 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 17:55:15.739071 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 17:55:15.739164 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 17:55:15.739386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 17:55:15.741830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 17:55:15.742622 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 17:55:15.742718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 17:55:15.748757 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 17:55:15.748924 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 17:55:15.748957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 17:55:15.752826 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 20 17:55:15.814488 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 17:55:15.768276 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 20 17:55:15.786358 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 20 17:55:15.815125 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 20 17:55:15.824041 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 20 17:55:15.842185 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 17:55:15.842371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 17:55:15.844816 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 17:55:15.860238 lvm[1504]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 17:55:15.898942 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 20 17:55:15.899108 systemd[1]: Reached target time-set.target - System Time Set. Mar 20 17:55:15.900423 systemd-resolved[1464]: Positive Trust Anchors: Mar 20 17:55:15.900430 systemd-resolved[1464]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 17:55:15.900454 systemd-resolved[1464]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 17:55:15.901667 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 20 17:55:15.905151 augenrules[1511]: No rules Mar 20 17:55:15.905866 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 17:55:15.906027 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 17:55:15.912885 systemd-resolved[1464]: Defaulting to hostname 'linux'. Mar 20 17:55:15.914445 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 17:55:15.914606 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 17:55:15.925342 systemd-networkd[1463]: lo: Link UP Mar 20 17:55:15.925348 systemd-networkd[1463]: lo: Gained carrier Mar 20 17:55:15.926205 systemd-networkd[1463]: Enumeration completed Mar 20 17:55:15.926268 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 17:55:15.926419 systemd-networkd[1463]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Mar 20 17:55:15.926428 systemd[1]: Reached target network.target - Network. Mar 20 17:55:15.927820 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Mar 20 17:55:15.927939 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Mar 20 17:55:15.928703 systemd-networkd[1463]: ens192: Link UP Mar 20 17:55:15.928822 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 20 17:55:15.931839 systemd-networkd[1463]: ens192: Gained carrier Mar 20 17:55:15.931841 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 20 17:55:15.933350 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 20 17:55:15.939269 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 20 17:55:15.942349 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 20 17:55:15.957506 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 20 17:55:15.968495 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 17:55:16.305670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 17:55:16.312815 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 20 17:55:16.313017 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 20 17:55:16.313038 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 17:55:16.313182 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
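At this point ens192 has been matched by /etc/systemd/network/00-vmware.network, the link has carrier, and systemd-resolved has fallen back to the hostname 'linux'. The resulting state is easiest to read back with the stock CLIs; a short sketch (nothing below is taken from the log itself):

# addressing, DNS and carrier state for the VMware NIC as networkd sees it
networkctl status ens192
# resolver configuration, including the trust anchors listed above
resolvectl status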
Mar 20 17:55:16.313295 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 17:55:16.313475 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 20 17:55:16.313607 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 17:55:16.313708 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 20 17:55:16.313814 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 17:55:16.313832 systemd[1]: Reached target paths.target - Path Units. Mar 20 17:55:16.313910 systemd[1]: Reached target timers.target - Timer Units. Mar 20 17:55:16.315149 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 17:55:16.316103 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 17:55:16.317581 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 17:55:16.317789 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 20 17:55:16.317910 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 17:55:16.321084 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 20 17:55:16.321384 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 17:55:16.321871 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 17:55:16.322015 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 17:55:16.322113 systemd[1]: Reached target basic.target - Basic System. Mar 20 17:55:16.322234 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 17:55:16.322252 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 20 17:55:16.322911 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 17:55:16.323837 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 20 17:55:16.325564 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 17:55:16.326871 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 17:55:16.327082 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 17:55:16.329187 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 17:55:16.342592 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 20 17:55:16.344841 jq[1531]: false Mar 20 17:55:16.345067 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 20 17:55:16.347365 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 20 17:55:16.354436 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 17:55:16.355069 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 17:55:16.355987 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 20 17:55:16.358874 extend-filesystems[1532]: Found loop4 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found loop5 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found loop6 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found loop7 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found sda Mar 20 17:55:16.358874 extend-filesystems[1532]: Found sda1 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found sda2 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found sda3 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found usr Mar 20 17:55:16.358874 extend-filesystems[1532]: Found sda4 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found sda6 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found sda7 Mar 20 17:55:16.358874 extend-filesystems[1532]: Found sda9 Mar 20 17:55:16.358874 extend-filesystems[1532]: Checking size of /dev/sda9 Mar 20 17:55:16.359994 systemd[1]: Starting update-engine.service - Update Engine... Mar 20 17:55:16.360935 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 20 17:55:16.364830 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Mar 20 17:55:16.366970 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 20 17:55:16.367090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 17:55:16.374966 extend-filesystems[1532]: Old size kept for /dev/sda9 Mar 20 17:55:16.374966 extend-filesystems[1532]: Found sr0 Mar 20 17:55:16.374211 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 17:55:16.374804 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 20 17:55:16.375067 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 20 17:55:16.375183 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 17:55:16.376001 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 17:55:16.376110 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 20 17:55:16.382256 jq[1546]: true Mar 20 17:55:16.392472 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 17:55:16.406785 jq[1566]: true Mar 20 17:55:16.408895 dbus-daemon[1530]: [system] SELinux support is enabled Mar 20 17:55:16.415315 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1386) Mar 20 17:55:16.416666 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 20 17:55:16.425490 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Mar 20 17:55:16.426155 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 17:55:16.426186 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 17:55:16.426338 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 17:55:16.426357 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 20 17:55:16.428130 tar[1553]: linux-amd64/helm Mar 20 17:55:16.430681 update_engine[1541]: I20250320 17:55:16.430433 1541 main.cc:92] Flatcar Update Engine starting Mar 20 17:55:16.436395 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Mar 20 17:55:16.438286 update_engine[1541]: I20250320 17:55:16.438249 1541 update_check_scheduler.cc:74] Next update check in 11m39s Mar 20 17:55:16.438543 systemd[1]: Started update-engine.service - Update Engine. Mar 20 17:55:16.488087 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 20 17:55:16.493929 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Mar 20 17:55:16.520448 systemd-logind[1540]: Watching system buttons on /dev/input/event1 (Power Button) Mar 20 17:55:16.520462 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 20 17:55:16.521931 systemd-logind[1540]: New seat seat0. Mar 20 17:55:16.523497 systemd[1]: Started systemd-logind.service - User Login Management. Mar 20 17:55:16.528591 unknown[1572]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Mar 20 17:55:16.532083 unknown[1572]: Core dump limit set to -1 Mar 20 17:55:16.540082 bash[1592]: Updated "/home/core/.ssh/authorized_keys" Mar 20 17:55:16.541139 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 17:55:16.541846 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 20 17:55:16.612908 locksmithd[1575]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 17:55:16.671766 containerd[1563]: time="2025-03-20T17:55:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 17:55:16.675766 containerd[1563]: time="2025-03-20T17:55:16.673019165Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 17:55:16.693354 containerd[1563]: time="2025-03-20T17:55:16.693321871Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.097µs" Mar 20 17:55:16.693354 containerd[1563]: time="2025-03-20T17:55:16.693347549Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 17:55:16.693468 containerd[1563]: time="2025-03-20T17:55:16.693365742Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 17:55:16.693499 containerd[1563]: time="2025-03-20T17:55:16.693486955Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 17:55:16.693515 containerd[1563]: time="2025-03-20T17:55:16.693501495Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 17:55:16.693529 containerd[1563]: time="2025-03-20T17:55:16.693517981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 17:55:16.693566 containerd[1563]: time="2025-03-20T17:55:16.693553805Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 17:55:16.693566 containerd[1563]: time="2025-03-20T17:55:16.693563559Z" 
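update_engine above schedules its first check in 11m39s and locksmithd comes up with the "reboot" strategy. On Flatcar both are normally steered from /etc/flatcar/update.conf; a hedged sketch of how that is typically inspected and adjusted (values illustrative, verify against the Flatcar update documentation):

# current update-engine state (idle, downloading, waiting for reboot, ...)
update_engine_client -status
# switch locksmith to coordinate reboots via etcd instead of rebooting immediately
echo 'REBOOT_STRATEGY=etcd-lock' | sudo tee -a /etc/flatcar/update.conf
sudo systemctl restart locksmithd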
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 17:55:16.693726 containerd[1563]: time="2025-03-20T17:55:16.693711338Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 17:55:16.693726 containerd[1563]: time="2025-03-20T17:55:16.693722692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 17:55:16.693792 containerd[1563]: time="2025-03-20T17:55:16.693728962Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 17:55:16.693792 containerd[1563]: time="2025-03-20T17:55:16.693734410Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 17:55:16.695351 containerd[1563]: time="2025-03-20T17:55:16.695328763Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 17:55:16.695763 containerd[1563]: time="2025-03-20T17:55:16.695474551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 17:55:16.695763 containerd[1563]: time="2025-03-20T17:55:16.695498415Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 17:55:16.695763 containerd[1563]: time="2025-03-20T17:55:16.695505471Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 17:55:16.695763 containerd[1563]: time="2025-03-20T17:55:16.695522117Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 17:55:16.695763 containerd[1563]: time="2025-03-20T17:55:16.695670934Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 17:55:16.695763 containerd[1563]: time="2025-03-20T17:55:16.695703467Z" level=info msg="metadata content store policy set" policy=shared Mar 20 17:55:16.701138 containerd[1563]: time="2025-03-20T17:55:16.701107826Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701149423Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701162290Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701171624Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701180018Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701186979Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701194247Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701202408Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701208820Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 17:55:16.701217 containerd[1563]: time="2025-03-20T17:55:16.701215122Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 17:55:16.701347 containerd[1563]: time="2025-03-20T17:55:16.701220341Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 17:55:16.701347 containerd[1563]: time="2025-03-20T17:55:16.701227047Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 17:55:16.701347 containerd[1563]: time="2025-03-20T17:55:16.701298880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 17:55:16.701347 containerd[1563]: time="2025-03-20T17:55:16.701311679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 17:55:16.701347 containerd[1563]: time="2025-03-20T17:55:16.701319774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 17:55:16.701347 containerd[1563]: time="2025-03-20T17:55:16.701326724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 17:55:16.701347 containerd[1563]: time="2025-03-20T17:55:16.701338113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 17:55:16.701347 containerd[1563]: time="2025-03-20T17:55:16.701344277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 17:55:16.701449 containerd[1563]: time="2025-03-20T17:55:16.701350054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 17:55:16.701449 containerd[1563]: time="2025-03-20T17:55:16.701355839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 17:55:16.701449 containerd[1563]: time="2025-03-20T17:55:16.701362685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 17:55:16.701449 containerd[1563]: time="2025-03-20T17:55:16.701371332Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 17:55:16.701449 containerd[1563]: time="2025-03-20T17:55:16.701377714Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 17:55:16.701449 containerd[1563]: time="2025-03-20T17:55:16.701417189Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 17:55:16.701449 containerd[1563]: time="2025-03-20T17:55:16.701426491Z" level=info msg="Start snapshots syncer" Mar 20 17:55:16.701449 containerd[1563]: time="2025-03-20T17:55:16.701440192Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 17:55:16.701614 containerd[1563]: time="2025-03-20T17:55:16.701592524Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 17:55:16.701697 containerd[1563]: time="2025-03-20T17:55:16.701626522Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 17:55:16.701697 containerd[1563]: time="2025-03-20T17:55:16.701664208Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 17:55:16.701728 containerd[1563]: time="2025-03-20T17:55:16.701713655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 17:55:16.701755 containerd[1563]: time="2025-03-20T17:55:16.701726128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 17:55:16.701755 containerd[1563]: time="2025-03-20T17:55:16.701733325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 17:55:16.701755 containerd[1563]: time="2025-03-20T17:55:16.701739979Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 17:55:16.701802 containerd[1563]: time="2025-03-20T17:55:16.701758632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 17:55:16.701802 containerd[1563]: time="2025-03-20T17:55:16.701770997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 17:55:16.701802 containerd[1563]: time="2025-03-20T17:55:16.701778414Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 17:55:16.701802 containerd[1563]: time="2025-03-20T17:55:16.701794148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 17:55:16.701857 containerd[1563]: 
time="2025-03-20T17:55:16.701804994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 17:55:16.701857 containerd[1563]: time="2025-03-20T17:55:16.701810908Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 17:55:16.701857 containerd[1563]: time="2025-03-20T17:55:16.701828831Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 17:55:16.701857 containerd[1563]: time="2025-03-20T17:55:16.701837723Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 17:55:16.701857 containerd[1563]: time="2025-03-20T17:55:16.701843393Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 17:55:16.701857 containerd[1563]: time="2025-03-20T17:55:16.701848841Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 17:55:16.701857 containerd[1563]: time="2025-03-20T17:55:16.701853406Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 17:55:16.701951 containerd[1563]: time="2025-03-20T17:55:16.701858916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 17:55:16.701951 containerd[1563]: time="2025-03-20T17:55:16.701866592Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 17:55:16.701951 containerd[1563]: time="2025-03-20T17:55:16.701877030Z" level=info msg="runtime interface created" Mar 20 17:55:16.701951 containerd[1563]: time="2025-03-20T17:55:16.701880268Z" level=info msg="created NRI interface" Mar 20 17:55:16.701951 containerd[1563]: time="2025-03-20T17:55:16.701886376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 17:55:16.701951 containerd[1563]: time="2025-03-20T17:55:16.701892615Z" level=info msg="Connect containerd service" Mar 20 17:55:16.701951 containerd[1563]: time="2025-03-20T17:55:16.701907137Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 17:55:16.704762 containerd[1563]: time="2025-03-20T17:55:16.703217104Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 17:55:16.816123 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 17:55:16.830935 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 17:55:16.832227 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 17:55:16.848559 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 17:55:16.848829 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 17:55:16.856123 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872385646Z" level=info msg="Start subscribing containerd event" Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872418711Z" level=info msg="Start recovering state" Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872455831Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872491507Z" level=info msg="Start event monitor" Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872504216Z" level=info msg="Start cni network conf syncer for default" Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872531001Z" level=info msg="Start streaming server" Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872539003Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872545027Z" level=info msg="runtime interface starting up..." Mar 20 17:55:16.872544 containerd[1563]: time="2025-03-20T17:55:16.872548106Z" level=info msg="starting plugins..." Mar 20 17:55:16.872734 containerd[1563]: time="2025-03-20T17:55:16.872557220Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 17:55:16.872734 containerd[1563]: time="2025-03-20T17:55:16.872492501Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 17:55:16.872734 containerd[1563]: time="2025-03-20T17:55:16.872636631Z" level=info msg="containerd successfully booted in 0.201495s" Mar 20 17:55:16.872828 systemd[1]: Started containerd.service - containerd container runtime. Mar 20 17:55:16.875076 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 17:55:16.878863 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 17:55:16.880312 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 20 17:55:16.880858 systemd[1]: Reached target getty.target - Login Prompts. Mar 20 17:55:16.975310 tar[1553]: linux-amd64/LICENSE Mar 20 17:55:16.975439 tar[1553]: linux-amd64/README.md Mar 20 17:55:16.988825 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 17:55:17.408878 systemd-networkd[1463]: ens192: Gained IPv6LL Mar 20 17:55:17.409528 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 20 17:55:17.410999 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 17:55:17.411507 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 17:55:17.413096 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Mar 20 17:55:17.430201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:55:17.432579 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 17:55:17.467668 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 17:55:17.505105 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 17:55:17.505239 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Mar 20 17:55:17.505919 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 20 17:55:18.563591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:55:18.564074 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 20 17:55:18.565813 systemd[1]: Startup finished in 1.005s (kernel) + 6.162s (initrd) + 4.799s (userspace) = 11.968s. Mar 20 17:55:18.568933 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 17:55:18.606023 login[1682]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 20 17:55:18.607788 login[1683]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 20 17:55:18.613114 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 17:55:18.613972 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 17:55:18.615914 systemd-logind[1540]: New session 1 of user core. Mar 20 17:55:18.631547 systemd-logind[1540]: New session 2 of user core. Mar 20 17:55:18.642024 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 17:55:18.644116 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 17:55:18.670359 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 17:55:18.672245 systemd-logind[1540]: New session c1 of user core. Mar 20 17:55:18.875728 systemd[1725]: Queued start job for default target default.target. Mar 20 17:55:18.881617 systemd[1725]: Created slice app.slice - User Application Slice. Mar 20 17:55:18.881635 systemd[1725]: Reached target paths.target - Paths. Mar 20 17:55:18.881662 systemd[1725]: Reached target timers.target - Timers. Mar 20 17:55:18.882461 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 17:55:18.888820 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 17:55:18.888934 systemd[1725]: Reached target sockets.target - Sockets. Mar 20 17:55:18.889005 systemd[1725]: Reached target basic.target - Basic System. Mar 20 17:55:18.889073 systemd[1725]: Reached target default.target - Main User Target. Mar 20 17:55:18.889127 systemd[1725]: Startup finished in 212ms. Mar 20 17:55:18.889196 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 17:55:18.890619 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 17:55:18.891395 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 20 17:55:19.163066 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Mar 20 17:55:19.531563 kubelet[1718]: E0320 17:55:19.531354 1718 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 17:55:19.532544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 17:55:19.532650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 17:55:19.532938 systemd[1]: kubelet.service: Consumed 605ms CPU time, 243.9M memory peak. Mar 20 17:55:29.783092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 20 17:55:29.784321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:55:30.142309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
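The kubelet failure above (and its repeats below) is the normal state of a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is written by 'kubeadm init' or 'kubeadm join', so until one of those runs the unit simply restarts roughly every 10 s. Purely as a sketch of the kind of file kubeadm generates (fields hand-picked here, not recovered from this host):

sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF
sudo systemctl restart kubelet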
Mar 20 17:55:30.144704 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 17:55:30.169379 kubelet[1768]: E0320 17:55:30.169353 1768 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 17:55:30.172008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 17:55:30.172098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 17:55:30.172367 systemd[1]: kubelet.service: Consumed 86ms CPU time, 97.4M memory peak. Mar 20 17:55:40.422456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 20 17:55:40.423634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:55:40.725883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:55:40.728251 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 17:55:40.788100 kubelet[1784]: E0320 17:55:40.788029 1784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 17:55:40.789258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 17:55:40.789339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 17:55:40.789733 systemd[1]: kubelet.service: Consumed 87ms CPU time, 98.1M memory peak. Mar 20 17:55:46.727081 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 17:55:46.728422 systemd[1]: Started sshd@0-139.178.70.103:22-139.178.68.195:41682.service - OpenSSH per-connection server daemon (139.178.68.195:41682). Mar 20 17:55:46.809621 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 41682 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:55:46.810409 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:55:46.813546 systemd-logind[1540]: New session 3 of user core. Mar 20 17:55:46.819871 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 17:55:46.875974 systemd[1]: Started sshd@1-139.178.70.103:22-139.178.68.195:41690.service - OpenSSH per-connection server daemon (139.178.68.195:41690). Mar 20 17:55:46.913088 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 41690 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:55:46.914286 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:55:46.918219 systemd-logind[1540]: New session 4 of user core. Mar 20 17:55:46.924864 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 17:55:46.973188 sshd[1800]: Connection closed by 139.178.68.195 port 41690 Mar 20 17:55:46.973109 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Mar 20 17:55:46.981771 systemd[1]: sshd@1-139.178.70.103:22-139.178.68.195:41690.service: Deactivated successfully. 
Mar 20 17:55:46.982840 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 17:55:46.983932 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Mar 20 17:55:46.984526 systemd[1]: Started sshd@2-139.178.70.103:22-139.178.68.195:41700.service - OpenSSH per-connection server daemon (139.178.68.195:41700). Mar 20 17:55:46.986090 systemd-logind[1540]: Removed session 4. Mar 20 17:55:47.021830 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 41700 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:55:47.022611 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:55:47.025534 systemd-logind[1540]: New session 5 of user core. Mar 20 17:55:47.031842 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 20 17:55:47.077769 sshd[1808]: Connection closed by 139.178.68.195 port 41700 Mar 20 17:55:47.078563 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Mar 20 17:55:47.086605 systemd[1]: sshd@2-139.178.70.103:22-139.178.68.195:41700.service: Deactivated successfully. Mar 20 17:55:47.087598 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 17:55:47.088150 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Mar 20 17:55:47.089179 systemd[1]: Started sshd@3-139.178.70.103:22-139.178.68.195:41708.service - OpenSSH per-connection server daemon (139.178.68.195:41708). Mar 20 17:55:47.091903 systemd-logind[1540]: Removed session 5. Mar 20 17:55:47.129334 sshd[1813]: Accepted publickey for core from 139.178.68.195 port 41708 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:55:47.130052 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:55:47.132775 systemd-logind[1540]: New session 6 of user core. Mar 20 17:55:47.150920 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 20 17:55:47.198819 sshd[1816]: Connection closed by 139.178.68.195 port 41708 Mar 20 17:55:47.199155 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Mar 20 17:55:47.208840 systemd[1]: sshd@3-139.178.70.103:22-139.178.68.195:41708.service: Deactivated successfully. Mar 20 17:55:47.209786 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 17:55:47.210648 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Mar 20 17:55:47.211414 systemd[1]: Started sshd@4-139.178.70.103:22-139.178.68.195:41712.service - OpenSSH per-connection server daemon (139.178.68.195:41712). Mar 20 17:55:47.212923 systemd-logind[1540]: Removed session 6. Mar 20 17:55:47.261248 sshd[1821]: Accepted publickey for core from 139.178.68.195 port 41712 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:55:47.261952 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:55:47.264531 systemd-logind[1540]: New session 7 of user core. Mar 20 17:55:47.272868 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 20 17:55:47.360020 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 17:55:47.360209 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 17:55:47.371720 sudo[1825]: pam_unix(sudo:session): session closed for user root Mar 20 17:55:47.373640 sshd[1824]: Connection closed by 139.178.68.195 port 41712 Mar 20 17:55:47.373568 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Mar 20 17:55:47.386711 systemd[1]: sshd@4-139.178.70.103:22-139.178.68.195:41712.service: Deactivated successfully. Mar 20 17:55:47.387988 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 17:55:47.388655 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Mar 20 17:55:47.390997 systemd[1]: Started sshd@5-139.178.70.103:22-139.178.68.195:41720.service - OpenSSH per-connection server daemon (139.178.68.195:41720). Mar 20 17:55:47.392181 systemd-logind[1540]: Removed session 7. Mar 20 17:55:47.424080 sshd[1830]: Accepted publickey for core from 139.178.68.195 port 41720 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:55:47.425047 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:55:47.428240 systemd-logind[1540]: New session 8 of user core. Mar 20 17:55:47.438913 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 20 17:55:47.486894 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 17:55:47.487056 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 17:55:47.488955 sudo[1835]: pam_unix(sudo:session): session closed for user root Mar 20 17:55:47.492088 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 17:55:47.492400 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 17:55:47.498160 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 17:55:47.523460 augenrules[1857]: No rules Mar 20 17:55:47.524673 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 17:55:47.524851 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 17:55:47.525651 sudo[1834]: pam_unix(sudo:session): session closed for user root Mar 20 17:55:47.526981 sshd[1833]: Connection closed by 139.178.68.195 port 41720 Mar 20 17:55:47.527287 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Mar 20 17:55:47.537965 systemd[1]: sshd@5-139.178.70.103:22-139.178.68.195:41720.service: Deactivated successfully. Mar 20 17:55:47.539301 systemd[1]: session-8.scope: Deactivated successfully. Mar 20 17:55:47.539865 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Mar 20 17:55:47.541268 systemd[1]: Started sshd@6-139.178.70.103:22-139.178.68.195:41732.service - OpenSSH per-connection server daemon (139.178.68.195:41732). Mar 20 17:55:47.543793 systemd-logind[1540]: Removed session 8. Mar 20 17:55:47.575765 sshd[1865]: Accepted publickey for core from 139.178.68.195 port 41732 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:55:47.576687 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:55:47.580767 systemd-logind[1540]: New session 9 of user core. Mar 20 17:55:47.585888 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 20 17:55:47.633676 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 17:55:47.633839 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 17:55:48.449976 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 20 17:55:48.463056 (dockerd)[1887]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 20 17:55:48.983373 dockerd[1887]: time="2025-03-20T17:55:48.983335658Z" level=info msg="Starting up" Mar 20 17:55:48.985854 dockerd[1887]: time="2025-03-20T17:55:48.985835463Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 20 17:55:49.062988 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4286335822-merged.mount: Deactivated successfully. Mar 20 17:55:49.177599 dockerd[1887]: time="2025-03-20T17:55:49.177570088Z" level=info msg="Loading containers: start." Mar 20 17:55:49.466226 kernel: Initializing XFRM netlink socket Mar 20 17:55:49.622447 systemd-networkd[1463]: docker0: Link UP Mar 20 17:57:06.471854 systemd-resolved[1464]: Clock change detected. Flushing caches. Mar 20 17:57:06.472014 systemd-timesyncd[1465]: Contacted time server 208.67.72.50:123 (2.flatcar.pool.ntp.org). Mar 20 17:57:06.472046 systemd-timesyncd[1465]: Initial clock synchronization to Thu 2025-03-20 17:57:06.471815 UTC. Mar 20 17:57:06.512071 dockerd[1887]: time="2025-03-20T17:57:06.511997866Z" level=info msg="Loading containers: done." Mar 20 17:57:06.521786 dockerd[1887]: time="2025-03-20T17:57:06.521752340Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 20 17:57:06.521881 dockerd[1887]: time="2025-03-20T17:57:06.521812924Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 20 17:57:06.521905 dockerd[1887]: time="2025-03-20T17:57:06.521887413Z" level=info msg="Daemon has completed initialization" Mar 20 17:57:06.540741 dockerd[1887]: time="2025-03-20T17:57:06.540522259Z" level=info msg="API listen on /run/docker.sock" Mar 20 17:57:06.540643 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 20 17:57:06.883678 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1772794363-merged.mount: Deactivated successfully. Mar 20 17:57:07.861836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 20 17:57:07.862934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:57:07.936025 containerd[1563]: time="2025-03-20T17:57:07.935754073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 20 17:57:08.190904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
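The apparent jump from 17:55:49 to 17:57:06 in the middle of the Docker bring-up is not a stall: systemd-timesyncd reached 2.flatcar.pool.ntp.org and stepped the clock roughly 77 s forward, and systemd-resolved flushed its caches in response. Sync state can be read back with timedatectl; a short sketch:

# which server was used for the last synchronization, and the measured offset
timedatectl timesync-status
# overall local clock / NTP summary
timedatectl status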
Mar 20 17:57:08.194138 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 17:57:08.235431 kubelet[2102]: E0320 17:57:08.235399 2102 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 17:57:08.236387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 17:57:08.236467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 17:57:08.236948 systemd[1]: kubelet.service: Consumed 104ms CPU time, 97.6M memory peak. Mar 20 17:57:08.594639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380135320.mount: Deactivated successfully. Mar 20 17:57:10.136443 containerd[1563]: time="2025-03-20T17:57:10.136145426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:10.139056 containerd[1563]: time="2025-03-20T17:57:10.138844129Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 20 17:57:10.141737 containerd[1563]: time="2025-03-20T17:57:10.141695284Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:10.143144 containerd[1563]: time="2025-03-20T17:57:10.143111595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:10.143754 containerd[1563]: time="2025-03-20T17:57:10.143660242Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.207872757s" Mar 20 17:57:10.143754 containerd[1563]: time="2025-03-20T17:57:10.143680086Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 20 17:57:10.156063 containerd[1563]: time="2025-03-20T17:57:10.156035920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 20 17:57:12.016882 containerd[1563]: time="2025-03-20T17:57:12.016842552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:12.021716 containerd[1563]: time="2025-03-20T17:57:12.021680501Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 20 17:57:12.026334 containerd[1563]: time="2025-03-20T17:57:12.026286428Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:12.031512 containerd[1563]: time="2025-03-20T17:57:12.031479788Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:12.031997 containerd[1563]: time="2025-03-20T17:57:12.031878185Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 1.875815093s" Mar 20 17:57:12.031997 containerd[1563]: time="2025-03-20T17:57:12.031899681Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 20 17:57:12.043478 containerd[1563]: time="2025-03-20T17:57:12.043455732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 20 17:57:13.722002 containerd[1563]: time="2025-03-20T17:57:13.721635146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:13.739274 containerd[1563]: time="2025-03-20T17:57:13.739236552Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 20 17:57:13.750698 containerd[1563]: time="2025-03-20T17:57:13.750646355Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:13.768536 containerd[1563]: time="2025-03-20T17:57:13.768483959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:13.769519 containerd[1563]: time="2025-03-20T17:57:13.769419375Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.725938578s" Mar 20 17:57:13.769519 containerd[1563]: time="2025-03-20T17:57:13.769444424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 20 17:57:13.786055 containerd[1563]: time="2025-03-20T17:57:13.786008583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 20 17:57:15.913281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017528353.mount: Deactivated successfully. 
Mar 20 17:57:16.168579 containerd[1563]: time="2025-03-20T17:57:16.168498281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:16.168975 containerd[1563]: time="2025-03-20T17:57:16.168943716Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 20 17:57:16.169306 containerd[1563]: time="2025-03-20T17:57:16.169265684Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:16.170079 containerd[1563]: time="2025-03-20T17:57:16.170064596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:16.170428 containerd[1563]: time="2025-03-20T17:57:16.170412618Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.384374173s" Mar 20 17:57:16.170457 containerd[1563]: time="2025-03-20T17:57:16.170430186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 20 17:57:16.181655 containerd[1563]: time="2025-03-20T17:57:16.181624060Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 20 17:57:16.782698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3391726019.mount: Deactivated successfully. 
Mar 20 17:57:17.894545 containerd[1563]: time="2025-03-20T17:57:17.893992474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:17.894545 containerd[1563]: time="2025-03-20T17:57:17.894427335Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 20 17:57:17.894545 containerd[1563]: time="2025-03-20T17:57:17.894521181Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:17.896101 containerd[1563]: time="2025-03-20T17:57:17.896088954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:17.896558 containerd[1563]: time="2025-03-20T17:57:17.896542154Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.714886367s" Mar 20 17:57:17.896589 containerd[1563]: time="2025-03-20T17:57:17.896562536Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 20 17:57:17.907360 containerd[1563]: time="2025-03-20T17:57:17.907326988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 20 17:57:18.392555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 20 17:57:18.394033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:57:18.432219 update_engine[1541]: I20250320 17:57:18.431964 1541 update_attempter.cc:509] Updating boot flags... Mar 20 17:57:18.475512 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2272) Mar 20 17:57:18.669442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:57:18.672055 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 17:57:18.875257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521403590.mount: Deactivated successfully. Mar 20 17:57:18.876701 kubelet[2283]: E0320 17:57:18.876618 2283 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 17:57:18.879302 containerd[1563]: time="2025-03-20T17:57:18.879281557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:18.879725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 17:57:18.879857 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
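The kubelet exits with the same missing /var/lib/kubelet/config.yaml error on every start; that file is typically written by kubeadm during init/join, so until then systemd keeps rescheduling the unit (the restart counter is at 4 here). The spacing of the retries can be read straight from the journal timestamps. A small sketch parsing the two timestamps quoted above (the unit's actual RestartSec setting is not shown in this log):

```python
from datetime import datetime

# Gap between the failure at 17:57:08 and the scheduled restart at 17:57:18,
# taken from the journal lines above. Illustrative parsing only.
fmt = "%H:%M:%S.%f"
exited    = datetime.strptime("17:57:08.236387", fmt)  # Main process exited
restarted = datetime.strptime("17:57:18.392555", fmt)  # Scheduled restart job

print(f"restart scheduled {(restarted - exited).total_seconds():.1f}s after the failure")
```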
Mar 20 17:57:18.880319 containerd[1563]: time="2025-03-20T17:57:18.880295495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 20 17:57:18.880600 systemd[1]: kubelet.service: Consumed 98ms CPU time, 100.9M memory peak. Mar 20 17:57:18.881981 containerd[1563]: time="2025-03-20T17:57:18.881158088Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:18.882674 containerd[1563]: time="2025-03-20T17:57:18.882650971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:18.883366 containerd[1563]: time="2025-03-20T17:57:18.883344537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 975.884859ms" Mar 20 17:57:18.883366 containerd[1563]: time="2025-03-20T17:57:18.883364600Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 20 17:57:18.895257 containerd[1563]: time="2025-03-20T17:57:18.895222324Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 20 17:57:19.533861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758296066.mount: Deactivated successfully. Mar 20 17:57:21.772410 containerd[1563]: time="2025-03-20T17:57:21.772318964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:21.778866 containerd[1563]: time="2025-03-20T17:57:21.778800270Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 20 17:57:21.789247 containerd[1563]: time="2025-03-20T17:57:21.789183973Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:21.794147 containerd[1563]: time="2025-03-20T17:57:21.794102762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:21.794927 containerd[1563]: time="2025-03-20T17:57:21.794852414Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.899606192s" Mar 20 17:57:21.794927 containerd[1563]: time="2025-03-20T17:57:21.794871943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 20 17:57:23.907114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:57:23.907231 systemd[1]: kubelet.service: Consumed 98ms CPU time, 100.9M memory peak. 
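Every image pulled so far is referenced both by tag and by a repo digest of the form repo@sha256:<hex>; the digest is the SHA-256 of the image manifest, so the reference is content-addressed rather than mutable. A generic illustration of how such a digest string is formed (the payload below is a stand-in, not an actual manifest):

```python
import hashlib

# A "repo@sha256:<hex>" reference pins content by hash; the hex part is the
# SHA-256 of the image manifest bytes. The payload here is only a stand-in.
manifest_bytes = b'{"schemaVersion": 2}'
digest = "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
print(digest)  # 64 hex characters after the "sha256:" prefix
```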
Mar 20 17:57:23.908923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:57:23.925345 systemd[1]: Reload requested from client PID 2428 ('systemctl') (unit session-9.scope)... Mar 20 17:57:23.925359 systemd[1]: Reloading... Mar 20 17:57:24.009064 zram_generator::config[2475]: No configuration found. Mar 20 17:57:24.064555 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 20 17:57:24.083415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 17:57:24.149740 systemd[1]: Reloading finished in 224 ms. Mar 20 17:57:24.167439 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 20 17:57:24.167509 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 20 17:57:24.167705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:57:24.169202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:57:24.432813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:57:24.442238 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 17:57:24.491904 kubelet[2540]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 17:57:24.491904 kubelet[2540]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 17:57:24.491904 kubelet[2540]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
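The escape-sequence warning for coreos-metadata.service comes from a line that shells out to `ip addr show ens192` and extracts the node's private address with `grep -Po "inet \K[\d.]+"`, exporting it as COREOS_CUSTOM_PRIVATE_IPV4. The same extraction can be expressed with an ordinary capturing group; a sketch with an invented sample line:

```python
import re

# Equivalent of: ip addr show ens192 | grep "inet 10." | grep -Po 'inet \K[\d.]+'
# Python's re has no \K, so a capturing group is used instead.
# The sample line below is invented for illustration.
sample = "    inet 10.200.0.12/24 brd 10.200.0.255 scope global ens192"
match = re.search(r"inet (10\.[\d.]+)", sample)
if match:
    print(f"COREOS_CUSTOM_PRIVATE_IPV4={match.group(1)}")
```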
Mar 20 17:57:24.498911 kubelet[2540]: I0320 17:57:24.498863 2540 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 17:57:24.674860 kubelet[2540]: I0320 17:57:24.674838 2540 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 20 17:57:24.674860 kubelet[2540]: I0320 17:57:24.674855 2540 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 17:57:24.675041 kubelet[2540]: I0320 17:57:24.675030 2540 server.go:927] "Client rotation is on, will bootstrap in background" Mar 20 17:57:24.692177 kubelet[2540]: I0320 17:57:24.691860 2540 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 17:57:24.693561 kubelet[2540]: E0320 17:57:24.693490 2540 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.700606 kubelet[2540]: I0320 17:57:24.700588 2540 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 20 17:57:24.701529 kubelet[2540]: I0320 17:57:24.701502 2540 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 17:57:24.702748 kubelet[2540]: I0320 17:57:24.701526 2540 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 20 17:57:24.703404 kubelet[2540]: I0320 17:57:24.703391 2540 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 17:57:24.703404 kubelet[2540]: I0320 17:57:24.703404 2540 container_manager_linux.go:301] "Creating device plugin manager" Mar 20 17:57:24.703483 kubelet[2540]: I0320 17:57:24.703472 2540 state_mem.go:36] "Initialized new in-memory state store" 
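The node config dump above lists the hard-eviction thresholds the kubelet will enforce, each with the LessThan operator: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%. A simplified sketch of how such quantity and percentage thresholds are compared against observed signals (the observed values below are invented):

```python
# Simplified threshold check mirroring the HardEvictionThresholds above.
# Quantity thresholds compare absolute bytes; percentage thresholds compare
# free/capacity. Observed values are invented for illustration.
thresholds = {
    "memory.available":  ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":  ("percentage", 0.10),
    "imagefs.available": ("percentage", 0.15),
}

observed = {
    "memory.available":  (80 * 1024 * 1024, None),    # bytes free
    "nodefs.available":  (12 * 10**9, 100 * 10**9),   # free, capacity
    "imagefs.available": (40 * 10**9, 200 * 10**9),   # free, capacity
}

for signal, (kind, limit) in thresholds.items():
    free, capacity = observed[signal]
    value = free if kind == "quantity" else free / capacity
    print(f"{signal}: {'BREACHED' if value < limit else 'ok'}")
```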
Mar 20 17:57:24.705429 kubelet[2540]: I0320 17:57:24.705358 2540 kubelet.go:400] "Attempting to sync node with API server" Mar 20 17:57:24.705429 kubelet[2540]: I0320 17:57:24.705371 2540 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 17:57:24.706046 kubelet[2540]: I0320 17:57:24.705980 2540 kubelet.go:312] "Adding apiserver pod source" Mar 20 17:57:24.706953 kubelet[2540]: W0320 17:57:24.706447 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.706953 kubelet[2540]: E0320 17:57:24.706499 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.708105 kubelet[2540]: I0320 17:57:24.707993 2540 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 17:57:24.710427 kubelet[2540]: W0320 17:57:24.710229 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.710427 kubelet[2540]: E0320 17:57:24.710254 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.710544 kubelet[2540]: I0320 17:57:24.710536 2540 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 17:57:24.711904 kubelet[2540]: I0320 17:57:24.711896 2540 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 17:57:24.713658 kubelet[2540]: W0320 17:57:24.713648 2540 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
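Every reflector list and certificate-signing request above fails with "dial tcp 139.178.70.103:6443: connect: connection refused", i.e. nothing is listening on the API-server port yet, which is expected while the kubelet itself is about to launch the static kube-apiserver pod. A minimal equivalent reachability probe (diagnostic sketch, not kubelet code):

```python
import socket

# "connect: connection refused" in the reflector errors above corresponds to a
# failed TCP connect to the API server endpoint quoted in the log.
def api_server_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(api_server_reachable("139.178.70.103", 6443))
```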
Mar 20 17:57:24.715578 kubelet[2540]: I0320 17:57:24.715501 2540 server.go:1264] "Started kubelet" Mar 20 17:57:24.717213 kubelet[2540]: I0320 17:57:24.717204 2540 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 17:57:24.721401 kubelet[2540]: I0320 17:57:24.721370 2540 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 17:57:24.721915 kubelet[2540]: E0320 17:57:24.721712 2540 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e948e4bb6a42c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 17:57:24.7154883 +0000 UTC m=+0.270966973,LastTimestamp:2025-03-20 17:57:24.7154883 +0000 UTC m=+0.270966973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 17:57:24.721915 kubelet[2540]: I0320 17:57:24.721782 2540 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 20 17:57:24.721915 kubelet[2540]: I0320 17:57:24.721828 2540 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 17:57:24.722013 kubelet[2540]: I0320 17:57:24.721999 2540 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 17:57:24.722433 kubelet[2540]: I0320 17:57:24.722426 2540 server.go:455] "Adding debug handlers to kubelet server" Mar 20 17:57:24.724137 kubelet[2540]: I0320 17:57:24.723968 2540 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 17:57:24.725060 kubelet[2540]: I0320 17:57:24.725047 2540 reconciler.go:26] "Reconciler: start to sync state" Mar 20 17:57:24.725790 kubelet[2540]: E0320 17:57:24.725339 2540 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="200ms" Mar 20 17:57:24.725790 kubelet[2540]: I0320 17:57:24.725470 2540 factory.go:221] Registration of the systemd container factory successfully Mar 20 17:57:24.725790 kubelet[2540]: I0320 17:57:24.725511 2540 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 17:57:24.727518 kubelet[2540]: W0320 17:57:24.727061 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.727518 kubelet[2540]: E0320 17:57:24.727086 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.727957 kubelet[2540]: I0320 17:57:24.727913 2540 factory.go:221] 
Registration of the containerd container factory successfully Mar 20 17:57:24.733904 kubelet[2540]: I0320 17:57:24.733884 2540 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 17:57:24.734990 kubelet[2540]: I0320 17:57:24.734974 2540 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 17:57:24.735032 kubelet[2540]: I0320 17:57:24.734995 2540 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 17:57:24.735032 kubelet[2540]: I0320 17:57:24.735007 2540 kubelet.go:2337] "Starting kubelet main sync loop" Mar 20 17:57:24.735070 kubelet[2540]: E0320 17:57:24.735030 2540 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 17:57:24.739578 kubelet[2540]: W0320 17:57:24.739542 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.739578 kubelet[2540]: E0320 17:57:24.739577 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:24.739998 kubelet[2540]: E0320 17:57:24.739984 2540 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 17:57:24.750152 kubelet[2540]: I0320 17:57:24.750139 2540 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 17:57:24.750418 kubelet[2540]: I0320 17:57:24.750238 2540 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 17:57:24.750418 kubelet[2540]: I0320 17:57:24.750251 2540 state_mem.go:36] "Initialized new in-memory state store" Mar 20 17:57:24.751276 kubelet[2540]: I0320 17:57:24.751270 2540 policy_none.go:49] "None policy: Start" Mar 20 17:57:24.751663 kubelet[2540]: I0320 17:57:24.751652 2540 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 17:57:24.751695 kubelet[2540]: I0320 17:57:24.751671 2540 state_mem.go:35] "Initializing new in-memory state store" Mar 20 17:57:24.754891 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 20 17:57:24.766488 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 17:57:24.768669 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
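The three slices systemd creates here are the per-QoS-class cgroup parents used by the kubelet's systemd cgroup driver: kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice. A simplified classifier for which parent a pod would fall under (a sketch of the QoS rules, not the kubelet's actual implementation):

```python
# Simplified QoS classification: Guaranteed pods (cpu and memory requests equal
# to limits) sit directly under kubepods.slice; pods with some resources set
# are Burstable; pods with none are BestEffort. Sketch only.
def qos_class(requests: dict, limits: dict) -> str:
    if not requests and not limits:
        return "BestEffort"   # kubepods-besteffort.slice
    if requests and requests == limits and {"cpu", "memory"} <= set(limits):
        return "Guaranteed"   # kubepods.slice
    return "Burstable"        # kubepods-burstable.slice

print(qos_class({"cpu": "250m", "memory": "256Mi"}, {}))                        # Burstable
print(qos_class({"cpu": "1", "memory": "1Gi"}, {"cpu": "1", "memory": "1Gi"}))  # Guaranteed
print(qos_class({}, {}))                                                        # BestEffort
```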
Mar 20 17:57:24.775582 kubelet[2540]: I0320 17:57:24.775564 2540 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 17:57:24.775705 kubelet[2540]: I0320 17:57:24.775682 2540 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 17:57:24.775705 kubelet[2540]: I0320 17:57:24.775743 2540 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 17:57:24.776933 kubelet[2540]: E0320 17:57:24.776919 2540 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 17:57:24.824846 kubelet[2540]: I0320 17:57:24.824824 2540 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 17:57:24.825069 kubelet[2540]: E0320 17:57:24.825049 2540 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Mar 20 17:57:24.835458 kubelet[2540]: I0320 17:57:24.835416 2540 topology_manager.go:215] "Topology Admit Handler" podUID="ecbe43f75b91110501f6b369945208e9" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 20 17:57:24.836658 kubelet[2540]: I0320 17:57:24.836272 2540 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 20 17:57:24.837228 kubelet[2540]: I0320 17:57:24.836854 2540 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 20 17:57:24.841154 systemd[1]: Created slice kubepods-burstable-podecbe43f75b91110501f6b369945208e9.slice - libcontainer container kubepods-burstable-podecbe43f75b91110501f6b369945208e9.slice. Mar 20 17:57:24.854340 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. Mar 20 17:57:24.864766 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. 
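Each static control-plane pod admitted above (kube-apiserver-localhost, kube-controller-manager-localhost, kube-scheduler-localhost) gets its own child slice under the burstable parent, named after its pod UID, matching the "Created slice kubepods-burstable-pod<uid>.slice" lines. A small formatter for that naming scheme (a sketch; dashes in a pod UID are normally mapped to underscores, though these UIDs contain none):

```python
# Slice name for a burstable pod, matching the "Created slice" entries above.
def burstable_pod_slice(pod_uid: str) -> str:
    return f"kubepods-burstable-pod{pod_uid.replace('-', '_')}.slice"

for uid in ("ecbe43f75b91110501f6b369945208e9",    # kube-apiserver-localhost
            "23a18e2dc14f395c5f1bea711a5a9344",    # kube-controller-manager-localhost
            "d79ab404294384d4bcc36fb5b5509bbb"):   # kube-scheduler-localhost
    print(burstable_pod_slice(uid))
```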
Mar 20 17:57:24.925360 kubelet[2540]: I0320 17:57:24.925317 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:24.926169 kubelet[2540]: E0320 17:57:24.925986 2540 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="400ms" Mar 20 17:57:25.026133 kubelet[2540]: I0320 17:57:25.025866 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecbe43f75b91110501f6b369945208e9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecbe43f75b91110501f6b369945208e9\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:57:25.026133 kubelet[2540]: I0320 17:57:25.025925 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecbe43f75b91110501f6b369945208e9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecbe43f75b91110501f6b369945208e9\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:57:25.026133 kubelet[2540]: I0320 17:57:25.025960 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:25.026133 kubelet[2540]: I0320 17:57:25.025976 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:25.026133 kubelet[2540]: I0320 17:57:25.025987 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 20 17:57:25.026263 kubelet[2540]: I0320 17:57:25.025996 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecbe43f75b91110501f6b369945208e9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ecbe43f75b91110501f6b369945208e9\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:57:25.026263 kubelet[2540]: I0320 17:57:25.026005 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:25.026263 
kubelet[2540]: I0320 17:57:25.026015 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:25.027355 kubelet[2540]: I0320 17:57:25.027310 2540 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 17:57:25.027517 kubelet[2540]: E0320 17:57:25.027502 2540 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Mar 20 17:57:25.152668 containerd[1563]: time="2025-03-20T17:57:25.152566028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ecbe43f75b91110501f6b369945208e9,Namespace:kube-system,Attempt:0,}" Mar 20 17:57:25.162974 containerd[1563]: time="2025-03-20T17:57:25.162890054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 20 17:57:25.166470 containerd[1563]: time="2025-03-20T17:57:25.166447912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 20 17:57:25.327000 kubelet[2540]: E0320 17:57:25.326903 2540 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="800ms" Mar 20 17:57:25.429244 kubelet[2540]: I0320 17:57:25.429223 2540 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 17:57:25.429446 kubelet[2540]: E0320 17:57:25.429430 2540 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Mar 20 17:57:25.535263 kubelet[2540]: W0320 17:57:25.535202 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:25.535263 kubelet[2540]: E0320 17:57:25.535248 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:25.571561 kubelet[2540]: W0320 17:57:25.571501 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:25.571561 kubelet[2540]: E0320 17:57:25.571546 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:25.851928 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2208482620.mount: Deactivated successfully. Mar 20 17:57:25.854191 containerd[1563]: time="2025-03-20T17:57:25.854164081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 17:57:25.854537 kubelet[2540]: W0320 17:57:25.854500 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:25.854579 kubelet[2540]: E0320 17:57:25.854547 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:25.855015 containerd[1563]: time="2025-03-20T17:57:25.854986757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 20 17:57:25.855634 containerd[1563]: time="2025-03-20T17:57:25.855618153Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 17:57:25.857563 containerd[1563]: time="2025-03-20T17:57:25.857496280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 17:57:25.857823 containerd[1563]: time="2025-03-20T17:57:25.857802289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 20 17:57:25.857992 containerd[1563]: time="2025-03-20T17:57:25.857969490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 691.67069ms" Mar 20 17:57:25.858656 containerd[1563]: time="2025-03-20T17:57:25.858190616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 17:57:25.858813 containerd[1563]: time="2025-03-20T17:57:25.858797971Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 17:57:25.859167 containerd[1563]: time="2025-03-20T17:57:25.859069854Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 20 17:57:25.860427 containerd[1563]: time="2025-03-20T17:57:25.860340430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 
688.485221ms" Mar 20 17:57:25.862903 containerd[1563]: time="2025-03-20T17:57:25.862736764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 678.445639ms" Mar 20 17:57:25.929742 containerd[1563]: time="2025-03-20T17:57:25.929621802Z" level=info msg="connecting to shim 1cc1131657caa7de80a4a9e7a7e0953af928df9c2a3909dc58c7e8236623a16c" address="unix:///run/containerd/s/9351bd5056568b54abe0a2e96e1d655ee85cc5939b6c3121a9bb46557d63b97d" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:57:25.932956 containerd[1563]: time="2025-03-20T17:57:25.932703405Z" level=info msg="connecting to shim a8e843b122f2b7926745f6262054d65574793bdce4bef8949fdd0254712f838f" address="unix:///run/containerd/s/07c2d0e3f631696599367aa9e66d20e9066a8e9965dee13709c647a6246e01b9" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:57:25.935211 containerd[1563]: time="2025-03-20T17:57:25.935189866Z" level=info msg="connecting to shim f5adef12e51aa66458026e4cb04c3f530a0ba956d5732e7a514a855f5d0fceaf" address="unix:///run/containerd/s/c7ddeac8983d00cfdb36268edcf3bf38145b16994f969db3bc868a270966245d" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:57:25.989097 systemd[1]: Started cri-containerd-1cc1131657caa7de80a4a9e7a7e0953af928df9c2a3909dc58c7e8236623a16c.scope - libcontainer container 1cc1131657caa7de80a4a9e7a7e0953af928df9c2a3909dc58c7e8236623a16c. Mar 20 17:57:25.990763 systemd[1]: Started cri-containerd-a8e843b122f2b7926745f6262054d65574793bdce4bef8949fdd0254712f838f.scope - libcontainer container a8e843b122f2b7926745f6262054d65574793bdce4bef8949fdd0254712f838f. Mar 20 17:57:25.992130 systemd[1]: Started cri-containerd-f5adef12e51aa66458026e4cb04c3f530a0ba956d5732e7a514a855f5d0fceaf.scope - libcontainer container f5adef12e51aa66458026e4cb04c3f530a0ba956d5732e7a514a855f5d0fceaf. 
Mar 20 17:57:26.090642 containerd[1563]: time="2025-03-20T17:57:26.090608717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cc1131657caa7de80a4a9e7a7e0953af928df9c2a3909dc58c7e8236623a16c\"" Mar 20 17:57:26.108630 containerd[1563]: time="2025-03-20T17:57:26.108281774Z" level=info msg="CreateContainer within sandbox \"1cc1131657caa7de80a4a9e7a7e0953af928df9c2a3909dc58c7e8236623a16c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 20 17:57:26.128389 kubelet[2540]: E0320 17:57:26.128361 2540 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="1.6s" Mar 20 17:57:26.230974 kubelet[2540]: I0320 17:57:26.230846 2540 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 17:57:26.231077 kubelet[2540]: E0320 17:57:26.231064 2540 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Mar 20 17:57:26.250741 kubelet[2540]: W0320 17:57:26.250681 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:26.250741 kubelet[2540]: E0320 17:57:26.250733 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:26.319285 containerd[1563]: time="2025-03-20T17:57:26.319247379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5adef12e51aa66458026e4cb04c3f530a0ba956d5732e7a514a855f5d0fceaf\"" Mar 20 17:57:26.320766 containerd[1563]: time="2025-03-20T17:57:26.320744336Z" level=info msg="CreateContainer within sandbox \"f5adef12e51aa66458026e4cb04c3f530a0ba956d5732e7a514a855f5d0fceaf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 20 17:57:26.334135 containerd[1563]: time="2025-03-20T17:57:26.334062146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ecbe43f75b91110501f6b369945208e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8e843b122f2b7926745f6262054d65574793bdce4bef8949fdd0254712f838f\"" Mar 20 17:57:26.335953 containerd[1563]: time="2025-03-20T17:57:26.335916094Z" level=info msg="CreateContainer within sandbox \"a8e843b122f2b7926745f6262054d65574793bdce4bef8949fdd0254712f838f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 20 17:57:26.403127 containerd[1563]: time="2025-03-20T17:57:26.403050669Z" level=info msg="Container a6223960322cfdc26fb01c321b47613615970250197c8d55d36c4d8d8eab3551: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:26.412699 containerd[1563]: time="2025-03-20T17:57:26.412556969Z" level=info msg="Container 38dca6c29271949e670c9daf1755da3ee9fec2fab095cfca596b479816114aa3: CDI devices 
from CRI Config.CDIDevices: []" Mar 20 17:57:26.414549 containerd[1563]: time="2025-03-20T17:57:26.414374631Z" level=info msg="CreateContainer within sandbox \"1cc1131657caa7de80a4a9e7a7e0953af928df9c2a3909dc58c7e8236623a16c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a6223960322cfdc26fb01c321b47613615970250197c8d55d36c4d8d8eab3551\"" Mar 20 17:57:26.415336 containerd[1563]: time="2025-03-20T17:57:26.415319070Z" level=info msg="StartContainer for \"a6223960322cfdc26fb01c321b47613615970250197c8d55d36c4d8d8eab3551\"" Mar 20 17:57:26.415991 containerd[1563]: time="2025-03-20T17:57:26.415968961Z" level=info msg="Container cecf6909234cb68e1256db3d36036149b76133c2a2fbea89a7273cb8c8806928: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:26.416824 containerd[1563]: time="2025-03-20T17:57:26.416802922Z" level=info msg="connecting to shim a6223960322cfdc26fb01c321b47613615970250197c8d55d36c4d8d8eab3551" address="unix:///run/containerd/s/9351bd5056568b54abe0a2e96e1d655ee85cc5939b6c3121a9bb46557d63b97d" protocol=ttrpc version=3 Mar 20 17:57:26.418366 containerd[1563]: time="2025-03-20T17:57:26.418330885Z" level=info msg="CreateContainer within sandbox \"f5adef12e51aa66458026e4cb04c3f530a0ba956d5732e7a514a855f5d0fceaf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38dca6c29271949e670c9daf1755da3ee9fec2fab095cfca596b479816114aa3\"" Mar 20 17:57:26.418890 containerd[1563]: time="2025-03-20T17:57:26.418878652Z" level=info msg="StartContainer for \"38dca6c29271949e670c9daf1755da3ee9fec2fab095cfca596b479816114aa3\"" Mar 20 17:57:26.419835 containerd[1563]: time="2025-03-20T17:57:26.419778144Z" level=info msg="connecting to shim 38dca6c29271949e670c9daf1755da3ee9fec2fab095cfca596b479816114aa3" address="unix:///run/containerd/s/c7ddeac8983d00cfdb36268edcf3bf38145b16994f969db3bc868a270966245d" protocol=ttrpc version=3 Mar 20 17:57:26.423878 containerd[1563]: time="2025-03-20T17:57:26.423850784Z" level=info msg="CreateContainer within sandbox \"a8e843b122f2b7926745f6262054d65574793bdce4bef8949fdd0254712f838f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cecf6909234cb68e1256db3d36036149b76133c2a2fbea89a7273cb8c8806928\"" Mar 20 17:57:26.424721 containerd[1563]: time="2025-03-20T17:57:26.424383881Z" level=info msg="StartContainer for \"cecf6909234cb68e1256db3d36036149b76133c2a2fbea89a7273cb8c8806928\"" Mar 20 17:57:26.425447 containerd[1563]: time="2025-03-20T17:57:26.425435385Z" level=info msg="connecting to shim cecf6909234cb68e1256db3d36036149b76133c2a2fbea89a7273cb8c8806928" address="unix:///run/containerd/s/07c2d0e3f631696599367aa9e66d20e9066a8e9965dee13709c647a6246e01b9" protocol=ttrpc version=3 Mar 20 17:57:26.431068 systemd[1]: Started cri-containerd-a6223960322cfdc26fb01c321b47613615970250197c8d55d36c4d8d8eab3551.scope - libcontainer container a6223960322cfdc26fb01c321b47613615970250197c8d55d36c4d8d8eab3551. Mar 20 17:57:26.445101 systemd[1]: Started cri-containerd-38dca6c29271949e670c9daf1755da3ee9fec2fab095cfca596b479816114aa3.scope - libcontainer container 38dca6c29271949e670c9daf1755da3ee9fec2fab095cfca596b479816114aa3. Mar 20 17:57:26.448104 systemd[1]: Started cri-containerd-cecf6909234cb68e1256db3d36036149b76133c2a2fbea89a7273cb8c8806928.scope - libcontainer container cecf6909234cb68e1256db3d36036149b76133c2a2fbea89a7273cb8c8806928. 
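The "Failed to ensure lease exists, will retry" errors double their retry interval on each attempt: 200ms, 400ms, 800ms and then 1.6s, a plain exponential backoff while the API server is still unreachable. A toy reproduction of that doubling (illustrative only; the real controller additionally caps the interval):

```python
# Doubling backoff matching the lease-retry intervals quoted in the log:
# 200ms -> 400ms -> 800ms -> 1.6s.
interval = 0.2
for attempt in range(1, 5):
    print(f"attempt {attempt}: retry in {interval:g}s")
    interval *= 2
```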
Mar 20 17:57:26.492644 containerd[1563]: time="2025-03-20T17:57:26.492539006Z" level=info msg="StartContainer for \"a6223960322cfdc26fb01c321b47613615970250197c8d55d36c4d8d8eab3551\" returns successfully" Mar 20 17:57:26.502028 containerd[1563]: time="2025-03-20T17:57:26.501953299Z" level=info msg="StartContainer for \"cecf6909234cb68e1256db3d36036149b76133c2a2fbea89a7273cb8c8806928\" returns successfully" Mar 20 17:57:26.513792 containerd[1563]: time="2025-03-20T17:57:26.513769128Z" level=info msg="StartContainer for \"38dca6c29271949e670c9daf1755da3ee9fec2fab095cfca596b479816114aa3\" returns successfully" Mar 20 17:57:26.728064 kubelet[2540]: E0320 17:57:26.727985 2540 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.103:6443: connect: connection refused Mar 20 17:57:27.777499 kubelet[2540]: E0320 17:57:27.777472 2540 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 20 17:57:27.832286 kubelet[2540]: I0320 17:57:27.832123 2540 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 17:57:27.846537 kubelet[2540]: I0320 17:57:27.846513 2540 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 20 17:57:27.859668 kubelet[2540]: E0320 17:57:27.859640 2540 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:57:27.960466 kubelet[2540]: E0320 17:57:27.960435 2540 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:57:28.061200 kubelet[2540]: E0320 17:57:28.061104 2540 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:57:28.161615 kubelet[2540]: E0320 17:57:28.161580 2540 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:57:28.262546 kubelet[2540]: E0320 17:57:28.262518 2540 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:57:28.363194 kubelet[2540]: E0320 17:57:28.363099 2540 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:57:28.463814 kubelet[2540]: E0320 17:57:28.463782 2540 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:57:28.713824 kubelet[2540]: I0320 17:57:28.713385 2540 apiserver.go:52] "Watching apiserver" Mar 20 17:57:28.713824 kubelet[2540]: E0320 17:57:28.713587 2540 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:28.725079 kubelet[2540]: I0320 17:57:28.725052 2540 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 17:57:29.462963 systemd[1]: Reload requested from client PID 2811 ('systemctl') (unit session-9.scope)... Mar 20 17:57:29.462972 systemd[1]: Reloading... Mar 20 17:57:29.529963 zram_generator::config[2858]: No configuration found. 
Mar 20 17:57:29.593767 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Mar 20 17:57:29.613070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 17:57:29.688816 systemd[1]: Reloading finished in 225 ms. Mar 20 17:57:29.704042 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:57:29.707149 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 17:57:29.707292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:57:29.707326 systemd[1]: kubelet.service: Consumed 429ms CPU time, 115.2M memory peak. Mar 20 17:57:29.709111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:57:29.879418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:57:29.881999 (kubelet)[2923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 17:57:30.045051 kubelet[2923]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 17:57:30.045051 kubelet[2923]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 17:57:30.045051 kubelet[2923]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 17:57:30.047612 kubelet[2923]: I0320 17:57:30.047556 2923 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 17:57:30.052516 kubelet[2923]: I0320 17:57:30.052493 2923 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 20 17:57:30.052976 kubelet[2923]: I0320 17:57:30.052757 2923 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 17:57:30.053065 kubelet[2923]: I0320 17:57:30.053046 2923 server.go:927] "Client rotation is on, will bootstrap in background" Mar 20 17:57:30.055184 kubelet[2923]: I0320 17:57:30.055093 2923 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 20 17:57:30.056185 kubelet[2923]: I0320 17:57:30.055934 2923 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 17:57:30.063416 kubelet[2923]: I0320 17:57:30.063392 2923 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 17:57:30.063540 kubelet[2923]: I0320 17:57:30.063522 2923 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 17:57:30.063670 kubelet[2923]: I0320 17:57:30.063541 2923 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 20 17:57:30.063735 kubelet[2923]: I0320 17:57:30.063677 2923 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 17:57:30.063735 kubelet[2923]: I0320 17:57:30.063684 2923 container_manager_linux.go:301] "Creating device plugin manager" Mar 20 17:57:30.063735 kubelet[2923]: I0320 17:57:30.063711 2923 state_mem.go:36] "Initialized new in-memory state store" Mar 20 17:57:30.064003 kubelet[2923]: I0320 17:57:30.063766 2923 kubelet.go:400] "Attempting to sync node with API server" Mar 20 17:57:30.064003 kubelet[2923]: I0320 17:57:30.063774 2923 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 17:57:30.064003 kubelet[2923]: I0320 17:57:30.063788 2923 kubelet.go:312] "Adding apiserver pod source" Mar 20 17:57:30.064003 kubelet[2923]: I0320 17:57:30.063798 2923 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 17:57:30.066066 kubelet[2923]: I0320 17:57:30.064752 2923 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 17:57:30.066288 kubelet[2923]: I0320 17:57:30.066274 2923 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 17:57:30.067713 kubelet[2923]: I0320 17:57:30.067693 2923 server.go:1264] "Started kubelet" Mar 20 17:57:30.072961 kubelet[2923]: I0320 17:57:30.072154 2923 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 17:57:30.072961 kubelet[2923]: I0320 17:57:30.072149 2923 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 17:57:30.072961 kubelet[2923]: I0320 17:57:30.072345 2923 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 17:57:30.072961 kubelet[2923]: I0320 17:57:30.072369 2923 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 17:57:30.073086 kubelet[2923]: I0320 17:57:30.072976 2923 server.go:455] "Adding debug handlers to kubelet server" Mar 20 17:57:30.080542 kubelet[2923]: I0320 17:57:30.080524 2923 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 20 17:57:30.080633 kubelet[2923]: I0320 17:57:30.080578 2923 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 17:57:30.080655 kubelet[2923]: I0320 17:57:30.080649 2923 reconciler.go:26] "Reconciler: start to sync state" Mar 20 17:57:30.084525 kubelet[2923]: I0320 17:57:30.084103 2923 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 17:57:30.084525 kubelet[2923]: I0320 17:57:30.084207 2923 factory.go:221] Registration of the systemd container factory successfully Mar 20 17:57:30.084525 kubelet[2923]: I0320 17:57:30.084266 2923 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 17:57:30.085119 kubelet[2923]: I0320 17:57:30.084929 2923 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 17:57:30.085119 kubelet[2923]: I0320 17:57:30.084956 2923 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 17:57:30.085119 kubelet[2923]: I0320 17:57:30.084971 2923 kubelet.go:2337] "Starting kubelet main sync loop" Mar 20 17:57:30.085119 kubelet[2923]: E0320 17:57:30.084992 2923 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 17:57:30.086409 kubelet[2923]: E0320 17:57:30.085782 2923 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 17:57:30.087248 kubelet[2923]: I0320 17:57:30.086962 2923 factory.go:221] Registration of the containerd container factory successfully Mar 20 17:57:30.120476 kubelet[2923]: I0320 17:57:30.120453 2923 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 17:57:30.120476 kubelet[2923]: I0320 17:57:30.120466 2923 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 17:57:30.120476 kubelet[2923]: I0320 17:57:30.120478 2923 state_mem.go:36] "Initialized new in-memory state store" Mar 20 17:57:30.120609 kubelet[2923]: I0320 17:57:30.120572 2923 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 17:57:30.120609 kubelet[2923]: I0320 17:57:30.120578 2923 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 17:57:30.120609 kubelet[2923]: I0320 17:57:30.120589 2923 policy_none.go:49] "None policy: Start" Mar 20 17:57:30.120906 kubelet[2923]: I0320 17:57:30.120880 2923 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 17:57:30.120906 kubelet[2923]: I0320 17:57:30.120889 2923 state_mem.go:35] "Initializing new in-memory state store" Mar 20 17:57:30.121017 kubelet[2923]: I0320 17:57:30.120967 2923 state_mem.go:75] "Updated machine memory state" Mar 20 17:57:30.123341 kubelet[2923]: I0320 17:57:30.123330 2923 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 17:57:30.123436 kubelet[2923]: I0320 17:57:30.123416 2923 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 17:57:30.123478 kubelet[2923]: I0320 17:57:30.123470 2923 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 17:57:30.183551 kubelet[2923]: I0320 17:57:30.183533 2923 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 17:57:30.185805 kubelet[2923]: I0320 17:57:30.185391 2923 topology_manager.go:215] "Topology Admit Handler" podUID="ecbe43f75b91110501f6b369945208e9" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 20 17:57:30.185805 kubelet[2923]: I0320 17:57:30.185453 2923 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 20 17:57:30.185805 kubelet[2923]: I0320 17:57:30.185487 2923 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 20 17:57:30.191270 kubelet[2923]: I0320 17:57:30.191239 2923 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 20 17:57:30.191372 kubelet[2923]: I0320 17:57:30.191293 2923 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 20 17:57:30.282748 kubelet[2923]: I0320 17:57:30.282704 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecbe43f75b91110501f6b369945208e9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecbe43f75b91110501f6b369945208e9\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:57:30.383227 kubelet[2923]: I0320 17:57:30.383198 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecbe43f75b91110501f6b369945208e9-usr-share-ca-certificates\") 
pod \"kube-apiserver-localhost\" (UID: \"ecbe43f75b91110501f6b369945208e9\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:57:30.383227 kubelet[2923]: I0320 17:57:30.383227 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:30.383342 kubelet[2923]: I0320 17:57:30.383240 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:30.383342 kubelet[2923]: I0320 17:57:30.383254 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 20 17:57:30.383342 kubelet[2923]: I0320 17:57:30.383262 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecbe43f75b91110501f6b369945208e9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecbe43f75b91110501f6b369945208e9\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:57:30.383342 kubelet[2923]: I0320 17:57:30.383272 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:30.383342 kubelet[2923]: I0320 17:57:30.383293 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:30.383435 kubelet[2923]: I0320 17:57:30.383305 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:57:30.462789 sudo[2955]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 20 17:57:30.463002 sudo[2955]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 20 17:57:30.840963 sudo[2955]: pam_unix(sudo:session): session closed for user root Mar 20 17:57:31.068769 kubelet[2923]: I0320 17:57:31.068741 2923 apiserver.go:52] "Watching apiserver" Mar 20 17:57:31.081664 kubelet[2923]: I0320 17:57:31.081621 2923 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 17:57:31.116886 kubelet[2923]: 
E0320 17:57:31.116821 2923 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 20 17:57:31.135838 kubelet[2923]: I0320 17:57:31.135535 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.13552154 podStartE2EDuration="1.13552154s" podCreationTimestamp="2025-03-20 17:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:57:31.12986096 +0000 UTC m=+1.124776396" watchObservedRunningTime="2025-03-20 17:57:31.13552154 +0000 UTC m=+1.130436969" Mar 20 17:57:31.144563 kubelet[2923]: I0320 17:57:31.143793 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.14378519 podStartE2EDuration="1.14378519s" podCreationTimestamp="2025-03-20 17:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:57:31.135840058 +0000 UTC m=+1.130755490" watchObservedRunningTime="2025-03-20 17:57:31.14378519 +0000 UTC m=+1.138700622" Mar 20 17:57:31.147827 kubelet[2923]: I0320 17:57:31.147751 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.147731407 podStartE2EDuration="1.147731407s" podCreationTimestamp="2025-03-20 17:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:57:31.14410818 +0000 UTC m=+1.139023618" watchObservedRunningTime="2025-03-20 17:57:31.147731407 +0000 UTC m=+1.142646838" Mar 20 17:57:32.323103 sudo[1869]: pam_unix(sudo:session): session closed for user root Mar 20 17:57:32.324370 sshd[1868]: Connection closed by 139.178.68.195 port 41732 Mar 20 17:57:32.324790 sshd-session[1865]: pam_unix(sshd:session): session closed for user core Mar 20 17:57:32.326861 systemd[1]: sshd@6-139.178.70.103:22-139.178.68.195:41732.service: Deactivated successfully. Mar 20 17:57:32.328142 systemd[1]: session-9.scope: Deactivated successfully. Mar 20 17:57:32.328299 systemd[1]: session-9.scope: Consumed 3.277s CPU time, 226.6M memory peak. Mar 20 17:57:32.329124 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. Mar 20 17:57:32.329737 systemd-logind[1540]: Removed session 9. Mar 20 17:57:44.359278 kubelet[2923]: I0320 17:57:44.359255 2923 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 17:57:44.360024 containerd[1563]: time="2025-03-20T17:57:44.359655783Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
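
The container_manager_linux NodeConfig dump near the top of this kubelet start-up is the one place the node's hard eviction thresholds are spelled out. Below is a minimal Python sketch that re-parses just the HardEvictionThresholds array; the values are copied verbatim from that log line (GracePeriod and MinReclaim dropped for brevity), not read from a running kubelet.

```python
import json

# HardEvictionThresholds as printed in the NodeConfig dump above.
thresholds_json = """
[
  {"Signal": "memory.available",  "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
  {"Signal": "nodefs.available",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}},
  {"Signal": "nodefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
  {"Signal": "imagefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}},
  {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}}
]
"""

for t in json.loads(thresholds_json):
    value = t["Value"]
    # A threshold is either an absolute quantity (100Mi) or a percentage.
    limit = value["Quantity"] if value["Quantity"] is not None else f"{value['Percentage']:.0%}"
    print(f"{t['Signal']:<19} {t['Operator']} {limit}")
```
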
Mar 20 17:57:44.360281 kubelet[2923]: I0320 17:57:44.359784 2923 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 17:57:45.354017 kubelet[2923]: I0320 17:57:45.353911 2923 topology_manager.go:215] "Topology Admit Handler" podUID="64ce9d88-9333-49da-b9a8-513fc3c26e90" podNamespace="kube-system" podName="kube-proxy-5jcvv" Mar 20 17:57:45.354017 kubelet[2923]: I0320 17:57:45.354008 2923 topology_manager.go:215] "Topology Admit Handler" podUID="4ff71a17-d60c-4aa8-b527-c5a5a9108b50" podNamespace="kube-system" podName="cilium-ml2n5" Mar 20 17:57:45.381751 kubelet[2923]: I0320 17:57:45.380978 2923 topology_manager.go:215] "Topology Admit Handler" podUID="c5d9a572-4e10-4b9a-8edf-8113cba4ece8" podNamespace="kube-system" podName="cilium-operator-599987898-2t2vw" Mar 20 17:57:45.401879 kubelet[2923]: I0320 17:57:45.401858 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-bpf-maps\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.401879 kubelet[2923]: I0320 17:57:45.401877 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-run\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.401981 kubelet[2923]: I0320 17:57:45.401888 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-cgroup\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.401981 kubelet[2923]: I0320 17:57:45.401897 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-lib-modules\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.401981 kubelet[2923]: I0320 17:57:45.401906 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-host-proc-sys-kernel\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.401981 kubelet[2923]: I0320 17:57:45.401915 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-hubble-tls\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.401981 kubelet[2923]: I0320 17:57:45.401923 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-config-path\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.401981 kubelet[2923]: I0320 17:57:45.401950 2923 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-hostproc\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.402084 kubelet[2923]: I0320 17:57:45.401962 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-clustermesh-secrets\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.402084 kubelet[2923]: I0320 17:57:45.401970 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-host-proc-sys-net\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.402084 kubelet[2923]: I0320 17:57:45.401980 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64ce9d88-9333-49da-b9a8-513fc3c26e90-lib-modules\") pod \"kube-proxy-5jcvv\" (UID: \"64ce9d88-9333-49da-b9a8-513fc3c26e90\") " pod="kube-system/kube-proxy-5jcvv" Mar 20 17:57:45.402084 kubelet[2923]: I0320 17:57:45.401989 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-xtables-lock\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.402084 kubelet[2923]: I0320 17:57:45.401999 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm6ch\" (UniqueName: \"kubernetes.io/projected/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-kube-api-access-bm6ch\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.402167 kubelet[2923]: I0320 17:57:45.402009 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/64ce9d88-9333-49da-b9a8-513fc3c26e90-kube-proxy\") pod \"kube-proxy-5jcvv\" (UID: \"64ce9d88-9333-49da-b9a8-513fc3c26e90\") " pod="kube-system/kube-proxy-5jcvv" Mar 20 17:57:45.402167 kubelet[2923]: I0320 17:57:45.402017 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c2rc\" (UniqueName: \"kubernetes.io/projected/64ce9d88-9333-49da-b9a8-513fc3c26e90-kube-api-access-7c2rc\") pod \"kube-proxy-5jcvv\" (UID: \"64ce9d88-9333-49da-b9a8-513fc3c26e90\") " pod="kube-system/kube-proxy-5jcvv" Mar 20 17:57:45.402167 kubelet[2923]: I0320 17:57:45.402026 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-etc-cni-netd\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.402167 kubelet[2923]: I0320 17:57:45.402034 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/64ce9d88-9333-49da-b9a8-513fc3c26e90-xtables-lock\") pod \"kube-proxy-5jcvv\" (UID: \"64ce9d88-9333-49da-b9a8-513fc3c26e90\") " pod="kube-system/kube-proxy-5jcvv" Mar 20 17:57:45.402167 kubelet[2923]: I0320 17:57:45.402043 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cni-path\") pod \"cilium-ml2n5\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " pod="kube-system/cilium-ml2n5" Mar 20 17:57:45.402687 systemd[1]: Created slice kubepods-besteffort-pod64ce9d88_9333_49da_b9a8_513fc3c26e90.slice - libcontainer container kubepods-besteffort-pod64ce9d88_9333_49da_b9a8_513fc3c26e90.slice. Mar 20 17:57:45.415173 systemd[1]: Created slice kubepods-burstable-pod4ff71a17_d60c_4aa8_b527_c5a5a9108b50.slice - libcontainer container kubepods-burstable-pod4ff71a17_d60c_4aa8_b527_c5a5a9108b50.slice. Mar 20 17:57:45.419811 systemd[1]: Created slice kubepods-besteffort-podc5d9a572_4e10_4b9a_8edf_8113cba4ece8.slice - libcontainer container kubepods-besteffort-podc5d9a572_4e10_4b9a_8edf_8113cba4ece8.slice. Mar 20 17:57:45.502349 kubelet[2923]: I0320 17:57:45.502168 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwrp5\" (UniqueName: \"kubernetes.io/projected/c5d9a572-4e10-4b9a-8edf-8113cba4ece8-kube-api-access-vwrp5\") pod \"cilium-operator-599987898-2t2vw\" (UID: \"c5d9a572-4e10-4b9a-8edf-8113cba4ece8\") " pod="kube-system/cilium-operator-599987898-2t2vw" Mar 20 17:57:45.502349 kubelet[2923]: I0320 17:57:45.502218 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5d9a572-4e10-4b9a-8edf-8113cba4ece8-cilium-config-path\") pod \"cilium-operator-599987898-2t2vw\" (UID: \"c5d9a572-4e10-4b9a-8edf-8113cba4ece8\") " pod="kube-system/cilium-operator-599987898-2t2vw" Mar 20 17:57:45.761034 containerd[1563]: time="2025-03-20T17:57:45.761007420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2t2vw,Uid:c5d9a572-4e10-4b9a-8edf-8113cba4ece8,Namespace:kube-system,Attempt:0,}" Mar 20 17:57:45.761271 containerd[1563]: time="2025-03-20T17:57:45.761007404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ml2n5,Uid:4ff71a17-d60c-4aa8-b527-c5a5a9108b50,Namespace:kube-system,Attempt:0,}" Mar 20 17:57:45.766760 containerd[1563]: time="2025-03-20T17:57:45.766739148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jcvv,Uid:64ce9d88-9333-49da-b9a8-513fc3c26e90,Namespace:kube-system,Attempt:0,}" Mar 20 17:57:45.952725 containerd[1563]: time="2025-03-20T17:57:45.952679417Z" level=info msg="connecting to shim 42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812" address="unix:///run/containerd/s/e23c65491ce671c4fece6e5baddcc285775acd4352557cf3e969f344b35bc987" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:57:45.960580 containerd[1563]: time="2025-03-20T17:57:45.960473032Z" level=info msg="connecting to shim aa387ad731f1b537ac480308238c82623a9260549d53389ecf883b0f0891decf" address="unix:///run/containerd/s/9bd92f3378c425fc373550cf82be95e9c3fb978f042ad0804cf426fd01df307d" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:57:45.973671 containerd[1563]: time="2025-03-20T17:57:45.973642871Z" level=info msg="connecting to shim 48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af" 
address="unix:///run/containerd/s/729cac1754cdd259ebb5993458cada92b1905fff2f30167e6ad1cb17a5cf06ed" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:57:45.974313 systemd[1]: Started cri-containerd-42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812.scope - libcontainer container 42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812. Mar 20 17:57:45.986062 systemd[1]: Started cri-containerd-aa387ad731f1b537ac480308238c82623a9260549d53389ecf883b0f0891decf.scope - libcontainer container aa387ad731f1b537ac480308238c82623a9260549d53389ecf883b0f0891decf. Mar 20 17:57:46.002061 systemd[1]: Started cri-containerd-48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af.scope - libcontainer container 48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af. Mar 20 17:57:46.027119 containerd[1563]: time="2025-03-20T17:57:46.026787868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jcvv,Uid:64ce9d88-9333-49da-b9a8-513fc3c26e90,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa387ad731f1b537ac480308238c82623a9260549d53389ecf883b0f0891decf\"" Mar 20 17:57:46.033538 containerd[1563]: time="2025-03-20T17:57:46.033519897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ml2n5,Uid:4ff71a17-d60c-4aa8-b527-c5a5a9108b50,Namespace:kube-system,Attempt:0,} returns sandbox id \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\"" Mar 20 17:57:46.039129 containerd[1563]: time="2025-03-20T17:57:46.039106942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2t2vw,Uid:c5d9a572-4e10-4b9a-8edf-8113cba4ece8,Namespace:kube-system,Attempt:0,} returns sandbox id \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\"" Mar 20 17:57:46.062203 containerd[1563]: time="2025-03-20T17:57:46.062069483Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 20 17:57:46.068814 containerd[1563]: time="2025-03-20T17:57:46.068785426Z" level=info msg="CreateContainer within sandbox \"aa387ad731f1b537ac480308238c82623a9260549d53389ecf883b0f0891decf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 17:57:46.073730 containerd[1563]: time="2025-03-20T17:57:46.073704181Z" level=info msg="Container 50ab0c7f54dc412ea9b0ac28738a52e19c1b2a35f99b43061d9c37c4fde2f257: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:46.078701 containerd[1563]: time="2025-03-20T17:57:46.078676687Z" level=info msg="CreateContainer within sandbox \"aa387ad731f1b537ac480308238c82623a9260549d53389ecf883b0f0891decf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50ab0c7f54dc412ea9b0ac28738a52e19c1b2a35f99b43061d9c37c4fde2f257\"" Mar 20 17:57:46.079401 containerd[1563]: time="2025-03-20T17:57:46.079383698Z" level=info msg="StartContainer for \"50ab0c7f54dc412ea9b0ac28738a52e19c1b2a35f99b43061d9c37c4fde2f257\"" Mar 20 17:57:46.080150 containerd[1563]: time="2025-03-20T17:57:46.080131866Z" level=info msg="connecting to shim 50ab0c7f54dc412ea9b0ac28738a52e19c1b2a35f99b43061d9c37c4fde2f257" address="unix:///run/containerd/s/9bd92f3378c425fc373550cf82be95e9c3fb978f042ad0804cf426fd01df307d" protocol=ttrpc version=3 Mar 20 17:57:46.097240 systemd[1]: Started cri-containerd-50ab0c7f54dc412ea9b0ac28738a52e19c1b2a35f99b43061d9c37c4fde2f257.scope - libcontainer container 50ab0c7f54dc412ea9b0ac28738a52e19c1b2a35f99b43061d9c37c4fde2f257. 
Mar 20 17:57:46.125073 containerd[1563]: time="2025-03-20T17:57:46.125050449Z" level=info msg="StartContainer for \"50ab0c7f54dc412ea9b0ac28738a52e19c1b2a35f99b43061d9c37c4fde2f257\" returns successfully" Mar 20 17:57:49.614157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3183461782.mount: Deactivated successfully. Mar 20 17:57:51.130134 containerd[1563]: time="2025-03-20T17:57:51.130090837Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:51.130913 containerd[1563]: time="2025-03-20T17:57:51.130879961Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 20 17:57:51.131242 containerd[1563]: time="2025-03-20T17:57:51.131215010Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:51.132407 containerd[1563]: time="2025-03-20T17:57:51.132183609Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.070085474s" Mar 20 17:57:51.132407 containerd[1563]: time="2025-03-20T17:57:51.132209981Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 20 17:57:51.134711 containerd[1563]: time="2025-03-20T17:57:51.134393675Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 20 17:57:51.136861 containerd[1563]: time="2025-03-20T17:57:51.136839181Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 17:57:51.155728 containerd[1563]: time="2025-03-20T17:57:51.155696228Z" level=info msg="Container 17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:51.156861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713784200.mount: Deactivated successfully. 
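
The cilium image pull above reports both the bytes fetched ("bytes read=166730503") and the elapsed time ("in 5.070085474s"), which is enough for a rough throughput estimate. A sketch of that arithmetic using only the figures printed in the log; note that "bytes read" counts compressed layer data, so this only approximates network throughput.

```python
# Figures copied from the two containerd lines above.
bytes_read = 166_730_503
elapsed_s = 5.070085474

rate = bytes_read / elapsed_s
print(f"{rate / 1_000_000:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")
# roughly 32.9 MB/s (31.4 MiB/s)
```
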
Mar 20 17:57:51.164423 containerd[1563]: time="2025-03-20T17:57:51.164395631Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\"" Mar 20 17:57:51.165348 containerd[1563]: time="2025-03-20T17:57:51.164884262Z" level=info msg="StartContainer for \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\"" Mar 20 17:57:51.166403 containerd[1563]: time="2025-03-20T17:57:51.166340710Z" level=info msg="connecting to shim 17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd" address="unix:///run/containerd/s/729cac1754cdd259ebb5993458cada92b1905fff2f30167e6ad1cb17a5cf06ed" protocol=ttrpc version=3 Mar 20 17:57:51.295116 systemd[1]: Started cri-containerd-17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd.scope - libcontainer container 17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd. Mar 20 17:57:51.319477 containerd[1563]: time="2025-03-20T17:57:51.319454456Z" level=info msg="StartContainer for \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\" returns successfully" Mar 20 17:57:51.328368 systemd[1]: cri-containerd-17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd.scope: Deactivated successfully. Mar 20 17:57:51.350567 containerd[1563]: time="2025-03-20T17:57:51.350536654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\" id:\"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\" pid:3324 exited_at:{seconds:1742493471 nanos:333801077}" Mar 20 17:57:51.372267 containerd[1563]: time="2025-03-20T17:57:51.372230277Z" level=info msg="received exit event container_id:\"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\" id:\"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\" pid:3324 exited_at:{seconds:1742493471 nanos:333801077}" Mar 20 17:57:51.395194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd-rootfs.mount: Deactivated successfully. 
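
The TaskExit events above carry the container's exit time as a seconds/nanos pair (exited_at {seconds:1742493471 nanos:333801077}) rather than a formatted timestamp. A short conversion sketch; the numbers are copied from the mount-cgroup exit event above.

```python
from datetime import datetime, timezone

# exited_at of the mount-cgroup container, copied from the TaskExit event above.
seconds, nanos = 1742493471, 333801077

exited_at = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(f"{exited_at.isoformat()} (+{nanos} ns)")
# 2025-03-20T17:57:51+00:00 (+333801077 ns), i.e. about 17 ms before the
# journal stamped the TaskExit line at Mar 20 17:57:51.350.
```
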
Mar 20 17:57:52.230245 containerd[1563]: time="2025-03-20T17:57:52.230215876Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 17:57:52.248985 containerd[1563]: time="2025-03-20T17:57:52.248056702Z" level=info msg="Container 8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:52.259029 kubelet[2923]: I0320 17:57:52.251282 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jcvv" podStartSLOduration=7.251263101 podStartE2EDuration="7.251263101s" podCreationTimestamp="2025-03-20 17:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:57:46.137385016 +0000 UTC m=+16.132300462" watchObservedRunningTime="2025-03-20 17:57:52.251263101 +0000 UTC m=+22.246178549" Mar 20 17:57:52.270018 containerd[1563]: time="2025-03-20T17:57:52.269934002Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\"" Mar 20 17:57:52.272211 containerd[1563]: time="2025-03-20T17:57:52.272062369Z" level=info msg="StartContainer for \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\"" Mar 20 17:57:52.272721 containerd[1563]: time="2025-03-20T17:57:52.272695776Z" level=info msg="connecting to shim 8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d" address="unix:///run/containerd/s/729cac1754cdd259ebb5993458cada92b1905fff2f30167e6ad1cb17a5cf06ed" protocol=ttrpc version=3 Mar 20 17:57:52.288032 systemd[1]: Started cri-containerd-8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d.scope - libcontainer container 8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d. Mar 20 17:57:52.305123 containerd[1563]: time="2025-03-20T17:57:52.305092913Z" level=info msg="StartContainer for \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\" returns successfully" Mar 20 17:57:52.310983 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 17:57:52.311309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 17:57:52.311643 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 20 17:57:52.314550 containerd[1563]: time="2025-03-20T17:57:52.314476894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\" id:\"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\" pid:3370 exited_at:{seconds:1742493472 nanos:314225309}" Mar 20 17:57:52.314536 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 17:57:52.315861 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Mar 20 17:57:52.316136 containerd[1563]: time="2025-03-20T17:57:52.316075481Z" level=info msg="received exit event container_id:\"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\" id:\"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\" pid:3370 exited_at:{seconds:1742493472 nanos:314225309}" Mar 20 17:57:52.316768 systemd[1]: cri-containerd-8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d.scope: Deactivated successfully. Mar 20 17:57:52.331001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d-rootfs.mount: Deactivated successfully. Mar 20 17:57:52.345630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 17:57:53.234967 containerd[1563]: time="2025-03-20T17:57:53.234607030Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 17:57:53.242147 containerd[1563]: time="2025-03-20T17:57:53.241441734Z" level=info msg="Container 76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:53.303677 containerd[1563]: time="2025-03-20T17:57:53.303654843Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\"" Mar 20 17:57:53.304177 containerd[1563]: time="2025-03-20T17:57:53.304160832Z" level=info msg="StartContainer for \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\"" Mar 20 17:57:53.305026 containerd[1563]: time="2025-03-20T17:57:53.305009663Z" level=info msg="connecting to shim 76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb" address="unix:///run/containerd/s/729cac1754cdd259ebb5993458cada92b1905fff2f30167e6ad1cb17a5cf06ed" protocol=ttrpc version=3 Mar 20 17:57:53.324063 systemd[1]: Started cri-containerd-76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb.scope - libcontainer container 76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb. Mar 20 17:57:53.354252 containerd[1563]: time="2025-03-20T17:57:53.354218493Z" level=info msg="StartContainer for \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\" returns successfully" Mar 20 17:57:53.370655 systemd[1]: cri-containerd-76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb.scope: Deactivated successfully. Mar 20 17:57:53.370910 systemd[1]: cri-containerd-76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb.scope: Consumed 16ms CPU time, 5.5M memory peak, 1M read from disk. 
Mar 20 17:57:53.371794 containerd[1563]: time="2025-03-20T17:57:53.371766868Z" level=info msg="received exit event container_id:\"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\" id:\"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\" pid:3427 exited_at:{seconds:1742493473 nanos:371573358}" Mar 20 17:57:53.372021 containerd[1563]: time="2025-03-20T17:57:53.372001590Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\" id:\"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\" pid:3427 exited_at:{seconds:1742493473 nanos:371573358}" Mar 20 17:57:53.385563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb-rootfs.mount: Deactivated successfully. Mar 20 17:57:54.244141 containerd[1563]: time="2025-03-20T17:57:54.243490357Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 20 17:57:54.282334 containerd[1563]: time="2025-03-20T17:57:54.282303757Z" level=info msg="Container 6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:54.311965 containerd[1563]: time="2025-03-20T17:57:54.311892490Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\"" Mar 20 17:57:54.312471 containerd[1563]: time="2025-03-20T17:57:54.312358036Z" level=info msg="StartContainer for \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\"" Mar 20 17:57:54.313958 containerd[1563]: time="2025-03-20T17:57:54.313924099Z" level=info msg="connecting to shim 6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7" address="unix:///run/containerd/s/729cac1754cdd259ebb5993458cada92b1905fff2f30167e6ad1cb17a5cf06ed" protocol=ttrpc version=3 Mar 20 17:57:54.335101 systemd[1]: Started cri-containerd-6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7.scope - libcontainer container 6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7. Mar 20 17:57:54.359065 systemd[1]: cri-containerd-6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7.scope: Deactivated successfully. 
Mar 20 17:57:54.360554 containerd[1563]: time="2025-03-20T17:57:54.359706316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\" id:\"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\" pid:3465 exited_at:{seconds:1742493474 nanos:359439150}" Mar 20 17:57:54.402962 containerd[1563]: time="2025-03-20T17:57:54.402302209Z" level=info msg="received exit event container_id:\"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\" id:\"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\" pid:3465 exited_at:{seconds:1742493474 nanos:359439150}" Mar 20 17:57:54.406773 containerd[1563]: time="2025-03-20T17:57:54.406755677Z" level=info msg="StartContainer for \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\" returns successfully" Mar 20 17:57:54.415413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7-rootfs.mount: Deactivated successfully. Mar 20 17:57:54.728477 containerd[1563]: time="2025-03-20T17:57:54.728159501Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:54.739264 containerd[1563]: time="2025-03-20T17:57:54.736956817Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 20 17:57:54.746002 containerd[1563]: time="2025-03-20T17:57:54.745972162Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:57:54.747433 containerd[1563]: time="2025-03-20T17:57:54.747409947Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.612988192s" Mar 20 17:57:54.752821 containerd[1563]: time="2025-03-20T17:57:54.747439171Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 20 17:57:54.752821 containerd[1563]: time="2025-03-20T17:57:54.749445993Z" level=info msg="CreateContainer within sandbox \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 20 17:57:54.822508 containerd[1563]: time="2025-03-20T17:57:54.822436906Z" level=info msg="Container 2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:54.860718 containerd[1563]: time="2025-03-20T17:57:54.860670405Z" level=info msg="CreateContainer within sandbox \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\"" Mar 20 17:57:54.861755 containerd[1563]: 
time="2025-03-20T17:57:54.860968354Z" level=info msg="StartContainer for \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\"" Mar 20 17:57:54.862248 containerd[1563]: time="2025-03-20T17:57:54.862233296Z" level=info msg="connecting to shim 2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd" address="unix:///run/containerd/s/e23c65491ce671c4fece6e5baddcc285775acd4352557cf3e969f344b35bc987" protocol=ttrpc version=3 Mar 20 17:57:54.879166 systemd[1]: Started cri-containerd-2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd.scope - libcontainer container 2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd. Mar 20 17:57:54.899163 containerd[1563]: time="2025-03-20T17:57:54.898817809Z" level=info msg="StartContainer for \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" returns successfully" Mar 20 17:57:55.257824 containerd[1563]: time="2025-03-20T17:57:55.255928780Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 20 17:57:55.319117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853431914.mount: Deactivated successfully. Mar 20 17:57:55.319812 containerd[1563]: time="2025-03-20T17:57:55.319794408Z" level=info msg="Container c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:57:55.357679 containerd[1563]: time="2025-03-20T17:57:55.357658161Z" level=info msg="CreateContainer within sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\"" Mar 20 17:57:55.358166 containerd[1563]: time="2025-03-20T17:57:55.358152578Z" level=info msg="StartContainer for \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\"" Mar 20 17:57:55.358808 containerd[1563]: time="2025-03-20T17:57:55.358795931Z" level=info msg="connecting to shim c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b" address="unix:///run/containerd/s/729cac1754cdd259ebb5993458cada92b1905fff2f30167e6ad1cb17a5cf06ed" protocol=ttrpc version=3 Mar 20 17:57:55.382080 systemd[1]: Started cri-containerd-c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b.scope - libcontainer container c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b. Mar 20 17:57:55.455565 containerd[1563]: time="2025-03-20T17:57:55.455539594Z" level=info msg="StartContainer for \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" returns successfully" Mar 20 17:57:55.737735 containerd[1563]: time="2025-03-20T17:57:55.737694621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" id:\"34e01e759e9af9bd5bd26c9fdeacb184ef7d9f47dbeb3d8c18ddae2db76b5534\" pid:3571 exited_at:{seconds:1742493475 nanos:736902491}" Mar 20 17:57:55.810037 kubelet[2923]: I0320 17:57:55.809975 2923 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 20 17:57:55.941341 systemd[1]: Created slice kubepods-burstable-poddb8eebfa_b64e_42de_9f73_e9d0a42bd562.slice - libcontainer container kubepods-burstable-poddb8eebfa_b64e_42de_9f73_e9d0a42bd562.slice. 
Mar 20 17:57:56.325384 containerd[1563]: time="2025-03-20T17:57:56.250425796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c8h8k,Uid:db8eebfa-b64e-42de-9f73-e9d0a42bd562,Namespace:kube-system,Attempt:0,}" Mar 20 17:57:56.325384 containerd[1563]: time="2025-03-20T17:57:56.274319995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5n5l4,Uid:1564e08a-f769-4a1a-a22d-05e92b14ddbd,Namespace:kube-system,Attempt:0,}" Mar 20 17:57:56.325617 kubelet[2923]: I0320 17:57:55.933310 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-2t2vw" podStartSLOduration=2.247366812 podStartE2EDuration="10.933294349s" podCreationTimestamp="2025-03-20 17:57:45 +0000 UTC" firstStartedPulling="2025-03-20 17:57:46.061980017 +0000 UTC m=+16.056895453" lastFinishedPulling="2025-03-20 17:57:54.747907555 +0000 UTC m=+24.742822990" observedRunningTime="2025-03-20 17:57:55.347136709 +0000 UTC m=+25.342052145" watchObservedRunningTime="2025-03-20 17:57:55.933294349 +0000 UTC m=+25.928209790" Mar 20 17:57:56.325617 kubelet[2923]: I0320 17:57:55.933747 2923 topology_manager.go:215] "Topology Admit Handler" podUID="db8eebfa-b64e-42de-9f73-e9d0a42bd562" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c8h8k" Mar 20 17:57:56.325617 kubelet[2923]: I0320 17:57:55.965548 2923 topology_manager.go:215] "Topology Admit Handler" podUID="1564e08a-f769-4a1a-a22d-05e92b14ddbd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5n5l4" Mar 20 17:57:56.325617 kubelet[2923]: I0320 17:57:56.075775 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kprk\" (UniqueName: \"kubernetes.io/projected/1564e08a-f769-4a1a-a22d-05e92b14ddbd-kube-api-access-2kprk\") pod \"coredns-7db6d8ff4d-5n5l4\" (UID: \"1564e08a-f769-4a1a-a22d-05e92b14ddbd\") " pod="kube-system/coredns-7db6d8ff4d-5n5l4" Mar 20 17:57:56.325617 kubelet[2923]: I0320 17:57:56.075809 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db8eebfa-b64e-42de-9f73-e9d0a42bd562-config-volume\") pod \"coredns-7db6d8ff4d-c8h8k\" (UID: \"db8eebfa-b64e-42de-9f73-e9d0a42bd562\") " pod="kube-system/coredns-7db6d8ff4d-c8h8k" Mar 20 17:57:55.971328 systemd[1]: Created slice kubepods-burstable-pod1564e08a_f769_4a1a_a22d_05e92b14ddbd.slice - libcontainer container kubepods-burstable-pod1564e08a_f769_4a1a_a22d_05e92b14ddbd.slice. 
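
The cilium-operator pod_startup_latency_tracker line above shows how podStartSLOduration relates to podStartE2EDuration: the numbers are consistent with the SLO figure excluding the image-pull window bounded by firstStartedPulling and lastFinishedPulling (for the static pods earlier both pull timestamps are zero, so the two durations coincide). A quick check of that arithmetic with the values from the log:

```python
# Numbers copied from the cilium-operator pod_startup_latency_tracker line
# above, reduced to seconds past 17:57:00 UTC.
first_started_pulling = 46.061980017
last_finished_pulling = 54.747907555
pod_start_e2e = 10.933294349          # podStartE2EDuration

image_pull_window = last_finished_pulling - first_started_pulling
slo_duration = pod_start_e2e - image_pull_window

print(f"image pull window: {image_pull_window:.9f}s")   # ~8.685927538s
print(f"SLO duration:      {slo_duration:.9f}s")
# ~2.247366811s, which matches podStartSLOduration=2.247366812 up to a
# nanosecond of rounding in the printed timestamps.
```
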
Mar 20 17:57:56.492134 kubelet[2923]: I0320 17:57:56.075824 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6h6f\" (UniqueName: \"kubernetes.io/projected/db8eebfa-b64e-42de-9f73-e9d0a42bd562-kube-api-access-s6h6f\") pod \"coredns-7db6d8ff4d-c8h8k\" (UID: \"db8eebfa-b64e-42de-9f73-e9d0a42bd562\") " pod="kube-system/coredns-7db6d8ff4d-c8h8k" Mar 20 17:57:56.492134 kubelet[2923]: I0320 17:57:56.075839 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1564e08a-f769-4a1a-a22d-05e92b14ddbd-config-volume\") pod \"coredns-7db6d8ff4d-5n5l4\" (UID: \"1564e08a-f769-4a1a-a22d-05e92b14ddbd\") " pod="kube-system/coredns-7db6d8ff4d-5n5l4" Mar 20 17:57:56.492134 kubelet[2923]: I0320 17:57:56.274683 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ml2n5" podStartSLOduration=6.202485279 podStartE2EDuration="11.274577633s" podCreationTimestamp="2025-03-20 17:57:45 +0000 UTC" firstStartedPulling="2025-03-20 17:57:46.061790377 +0000 UTC m=+16.056705810" lastFinishedPulling="2025-03-20 17:57:51.133882734 +0000 UTC m=+21.128798164" observedRunningTime="2025-03-20 17:57:56.27384008 +0000 UTC m=+26.268755525" watchObservedRunningTime="2025-03-20 17:57:56.274577633 +0000 UTC m=+26.269493073" Mar 20 17:57:59.122667 systemd-networkd[1463]: cilium_host: Link UP Mar 20 17:57:59.123478 systemd-networkd[1463]: cilium_net: Link UP Mar 20 17:57:59.124222 systemd-networkd[1463]: cilium_net: Gained carrier Mar 20 17:57:59.124353 systemd-networkd[1463]: cilium_host: Gained carrier Mar 20 17:57:59.250046 systemd-networkd[1463]: cilium_vxlan: Link UP Mar 20 17:57:59.250051 systemd-networkd[1463]: cilium_vxlan: Gained carrier Mar 20 17:57:59.567050 systemd-networkd[1463]: cilium_net: Gained IPv6LL Mar 20 17:57:59.723966 kernel: NET: Registered PF_ALG protocol family Mar 20 17:57:59.991037 systemd-networkd[1463]: cilium_host: Gained IPv6LL Mar 20 17:58:00.437040 systemd-networkd[1463]: lxc_health: Link UP Mar 20 17:58:00.443171 systemd-networkd[1463]: lxc_health: Gained carrier Mar 20 17:58:00.611707 systemd-networkd[1463]: lxc84a3adaab249: Link UP Mar 20 17:58:00.616952 kernel: eth0: renamed from tmpd4048 Mar 20 17:58:00.623700 systemd-networkd[1463]: lxc84a3adaab249: Gained carrier Mar 20 17:58:00.625218 systemd-networkd[1463]: lxc7bf721bf00eb: Link UP Mar 20 17:58:00.630977 kernel: eth0: renamed from tmpb0dcf Mar 20 17:58:00.638893 systemd-networkd[1463]: lxc7bf721bf00eb: Gained carrier Mar 20 17:58:00.952489 systemd-networkd[1463]: cilium_vxlan: Gained IPv6LL Mar 20 17:58:01.975033 systemd-networkd[1463]: lxc_health: Gained IPv6LL Mar 20 17:58:02.295026 systemd-networkd[1463]: lxc84a3adaab249: Gained IPv6LL Mar 20 17:58:02.359038 systemd-networkd[1463]: lxc7bf721bf00eb: Gained IPv6LL Mar 20 17:58:03.298439 containerd[1563]: time="2025-03-20T17:58:03.298113181Z" level=info msg="connecting to shim b0dcff95b60cba3f7f9b2637f2c2f7caeadedbe1b16d03f94d105df184bc4930" address="unix:///run/containerd/s/1f7fb308c51c8379ac9a91c3e9d7129ddcbc3b658fb732338d12e670fbec4556" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:58:03.300059 containerd[1563]: time="2025-03-20T17:58:03.300043513Z" level=info msg="connecting to shim d4048b529463ad1fb0683c6594684a110fd975fed9989d18a658918d36a3c69c" address="unix:///run/containerd/s/2bbdf3550c277f1ed782b0d7669bcfd37781fabdc4b0a0cb3694754e17ba6935" namespace=k8s.io 
protocol=ttrpc version=3 Mar 20 17:58:03.330274 systemd[1]: Started cri-containerd-b0dcff95b60cba3f7f9b2637f2c2f7caeadedbe1b16d03f94d105df184bc4930.scope - libcontainer container b0dcff95b60cba3f7f9b2637f2c2f7caeadedbe1b16d03f94d105df184bc4930. Mar 20 17:58:03.331222 systemd[1]: Started cri-containerd-d4048b529463ad1fb0683c6594684a110fd975fed9989d18a658918d36a3c69c.scope - libcontainer container d4048b529463ad1fb0683c6594684a110fd975fed9989d18a658918d36a3c69c. Mar 20 17:58:03.343932 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 17:58:03.344978 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 17:58:03.381972 containerd[1563]: time="2025-03-20T17:58:03.381312282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5n5l4,Uid:1564e08a-f769-4a1a-a22d-05e92b14ddbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0dcff95b60cba3f7f9b2637f2c2f7caeadedbe1b16d03f94d105df184bc4930\"" Mar 20 17:58:03.386878 containerd[1563]: time="2025-03-20T17:58:03.386861243Z" level=info msg="CreateContainer within sandbox \"b0dcff95b60cba3f7f9b2637f2c2f7caeadedbe1b16d03f94d105df184bc4930\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 17:58:03.388675 containerd[1563]: time="2025-03-20T17:58:03.388660362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c8h8k,Uid:db8eebfa-b64e-42de-9f73-e9d0a42bd562,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4048b529463ad1fb0683c6594684a110fd975fed9989d18a658918d36a3c69c\"" Mar 20 17:58:03.391590 containerd[1563]: time="2025-03-20T17:58:03.391521453Z" level=info msg="CreateContainer within sandbox \"d4048b529463ad1fb0683c6594684a110fd975fed9989d18a658918d36a3c69c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 17:58:03.403200 containerd[1563]: time="2025-03-20T17:58:03.403076497Z" level=info msg="Container 03f0f49e284685a5857e927588da23a58dc4127f358c62cbb3b4a2133f7a35c6: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:58:03.403297 containerd[1563]: time="2025-03-20T17:58:03.403282996Z" level=info msg="Container 1c9f3c30ce690e28b2040c2d9ea9288ca8c906342e33964b467beccf8b69f256: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:58:03.406451 containerd[1563]: time="2025-03-20T17:58:03.406408699Z" level=info msg="CreateContainer within sandbox \"b0dcff95b60cba3f7f9b2637f2c2f7caeadedbe1b16d03f94d105df184bc4930\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c9f3c30ce690e28b2040c2d9ea9288ca8c906342e33964b467beccf8b69f256\"" Mar 20 17:58:03.406801 containerd[1563]: time="2025-03-20T17:58:03.406763949Z" level=info msg="StartContainer for \"1c9f3c30ce690e28b2040c2d9ea9288ca8c906342e33964b467beccf8b69f256\"" Mar 20 17:58:03.407822 containerd[1563]: time="2025-03-20T17:58:03.407810394Z" level=info msg="connecting to shim 1c9f3c30ce690e28b2040c2d9ea9288ca8c906342e33964b467beccf8b69f256" address="unix:///run/containerd/s/1f7fb308c51c8379ac9a91c3e9d7129ddcbc3b658fb732338d12e670fbec4556" protocol=ttrpc version=3 Mar 20 17:58:03.411360 containerd[1563]: time="2025-03-20T17:58:03.411169739Z" level=info msg="CreateContainer within sandbox \"d4048b529463ad1fb0683c6594684a110fd975fed9989d18a658918d36a3c69c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03f0f49e284685a5857e927588da23a58dc4127f358c62cbb3b4a2133f7a35c6\"" Mar 20 17:58:03.412399 containerd[1563]: 
time="2025-03-20T17:58:03.411634744Z" level=info msg="StartContainer for \"03f0f49e284685a5857e927588da23a58dc4127f358c62cbb3b4a2133f7a35c6\"" Mar 20 17:58:03.412908 containerd[1563]: time="2025-03-20T17:58:03.412786797Z" level=info msg="connecting to shim 03f0f49e284685a5857e927588da23a58dc4127f358c62cbb3b4a2133f7a35c6" address="unix:///run/containerd/s/2bbdf3550c277f1ed782b0d7669bcfd37781fabdc4b0a0cb3694754e17ba6935" protocol=ttrpc version=3 Mar 20 17:58:03.426038 systemd[1]: Started cri-containerd-1c9f3c30ce690e28b2040c2d9ea9288ca8c906342e33964b467beccf8b69f256.scope - libcontainer container 1c9f3c30ce690e28b2040c2d9ea9288ca8c906342e33964b467beccf8b69f256. Mar 20 17:58:03.428906 systemd[1]: Started cri-containerd-03f0f49e284685a5857e927588da23a58dc4127f358c62cbb3b4a2133f7a35c6.scope - libcontainer container 03f0f49e284685a5857e927588da23a58dc4127f358c62cbb3b4a2133f7a35c6. Mar 20 17:58:03.453907 containerd[1563]: time="2025-03-20T17:58:03.453885009Z" level=info msg="StartContainer for \"03f0f49e284685a5857e927588da23a58dc4127f358c62cbb3b4a2133f7a35c6\" returns successfully" Mar 20 17:58:03.455176 containerd[1563]: time="2025-03-20T17:58:03.455133365Z" level=info msg="StartContainer for \"1c9f3c30ce690e28b2040c2d9ea9288ca8c906342e33964b467beccf8b69f256\" returns successfully" Mar 20 17:58:04.268816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286253092.mount: Deactivated successfully. Mar 20 17:58:04.281812 kubelet[2923]: I0320 17:58:04.280880 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c8h8k" podStartSLOduration=19.28086426 podStartE2EDuration="19.28086426s" podCreationTimestamp="2025-03-20 17:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:58:04.280217848 +0000 UTC m=+34.275133281" watchObservedRunningTime="2025-03-20 17:58:04.28086426 +0000 UTC m=+34.275779698" Mar 20 17:58:04.298218 kubelet[2923]: I0320 17:58:04.297856 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5n5l4" podStartSLOduration=19.297843784 podStartE2EDuration="19.297843784s" podCreationTimestamp="2025-03-20 17:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:58:04.297711634 +0000 UTC m=+34.292627067" watchObservedRunningTime="2025-03-20 17:58:04.297843784 +0000 UTC m=+34.292759217" Mar 20 17:58:33.332318 systemd[1]: Started sshd@7-139.178.70.103:22-139.178.68.195:48952.service - OpenSSH per-connection server daemon (139.178.68.195:48952). Mar 20 17:58:33.547899 sshd[4233]: Accepted publickey for core from 139.178.68.195 port 48952 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:58:33.553139 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:58:33.559261 systemd-logind[1540]: New session 10 of user core. Mar 20 17:58:33.567077 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 20 17:58:35.007152 sshd[4235]: Connection closed by 139.178.68.195 port 48952 Mar 20 17:58:35.007636 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Mar 20 17:58:35.010486 systemd[1]: sshd@7-139.178.70.103:22-139.178.68.195:48952.service: Deactivated successfully. Mar 20 17:58:35.011916 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 20 17:58:35.012553 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Mar 20 17:58:35.014134 systemd-logind[1540]: Removed session 10. Mar 20 17:58:40.022550 systemd[1]: Started sshd@8-139.178.70.103:22-139.178.68.195:51686.service - OpenSSH per-connection server daemon (139.178.68.195:51686). Mar 20 17:58:40.303604 sshd[4249]: Accepted publickey for core from 139.178.68.195 port 51686 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:58:40.304567 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:58:40.308457 systemd-logind[1540]: New session 11 of user core. Mar 20 17:58:40.315124 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 20 17:58:40.533516 sshd[4251]: Connection closed by 139.178.68.195 port 51686 Mar 20 17:58:40.533861 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Mar 20 17:58:40.536289 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Mar 20 17:58:40.536531 systemd[1]: sshd@8-139.178.70.103:22-139.178.68.195:51686.service: Deactivated successfully. Mar 20 17:58:40.537849 systemd[1]: session-11.scope: Deactivated successfully. Mar 20 17:58:40.538434 systemd-logind[1540]: Removed session 11. Mar 20 17:58:45.543843 systemd[1]: Started sshd@9-139.178.70.103:22-139.178.68.195:51702.service - OpenSSH per-connection server daemon (139.178.68.195:51702). Mar 20 17:58:45.588167 sshd[4264]: Accepted publickey for core from 139.178.68.195 port 51702 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:58:45.589536 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:58:45.594004 systemd-logind[1540]: New session 12 of user core. Mar 20 17:58:45.599064 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 20 17:58:45.704975 sshd[4266]: Connection closed by 139.178.68.195 port 51702 Mar 20 17:58:45.705379 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Mar 20 17:58:45.707461 systemd[1]: sshd@9-139.178.70.103:22-139.178.68.195:51702.service: Deactivated successfully. Mar 20 17:58:45.708970 systemd[1]: session-12.scope: Deactivated successfully. Mar 20 17:58:45.710160 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Mar 20 17:58:45.711003 systemd-logind[1540]: Removed session 12. Mar 20 17:58:50.714615 systemd[1]: Started sshd@10-139.178.70.103:22-139.178.68.195:55690.service - OpenSSH per-connection server daemon (139.178.68.195:55690). Mar 20 17:58:50.753801 sshd[4281]: Accepted publickey for core from 139.178.68.195 port 55690 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:58:50.754527 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:58:50.757894 systemd-logind[1540]: New session 13 of user core. Mar 20 17:58:50.766023 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 20 17:58:50.854267 sshd[4283]: Connection closed by 139.178.68.195 port 55690 Mar 20 17:58:50.854695 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Mar 20 17:58:50.864070 systemd[1]: sshd@10-139.178.70.103:22-139.178.68.195:55690.service: Deactivated successfully. Mar 20 17:58:50.865096 systemd[1]: session-13.scope: Deactivated successfully. Mar 20 17:58:50.865912 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. 
Mar 20 17:58:50.867218 systemd[1]: Started sshd@11-139.178.70.103:22-139.178.68.195:55696.service - OpenSSH per-connection server daemon (139.178.68.195:55696). Mar 20 17:58:50.868116 systemd-logind[1540]: Removed session 13. Mar 20 17:58:50.901396 sshd[4294]: Accepted publickey for core from 139.178.68.195 port 55696 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:58:50.902081 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:58:50.904823 systemd-logind[1540]: New session 14 of user core. Mar 20 17:58:50.913038 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 20 17:58:51.108508 sshd[4297]: Connection closed by 139.178.68.195 port 55696 Mar 20 17:58:51.110712 sshd-session[4294]: pam_unix(sshd:session): session closed for user core Mar 20 17:58:51.118646 systemd[1]: sshd@11-139.178.70.103:22-139.178.68.195:55696.service: Deactivated successfully. Mar 20 17:58:51.120484 systemd[1]: session-14.scope: Deactivated successfully. Mar 20 17:58:51.122146 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Mar 20 17:58:51.124319 systemd[1]: Started sshd@12-139.178.70.103:22-139.178.68.195:55702.service - OpenSSH per-connection server daemon (139.178.68.195:55702). Mar 20 17:58:51.126770 systemd-logind[1540]: Removed session 14. Mar 20 17:58:51.260120 sshd[4306]: Accepted publickey for core from 139.178.68.195 port 55702 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:58:51.261149 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:58:51.264877 systemd-logind[1540]: New session 15 of user core. Mar 20 17:58:51.272127 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 20 17:58:51.419338 sshd[4309]: Connection closed by 139.178.68.195 port 55702 Mar 20 17:58:51.419817 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Mar 20 17:58:51.421685 systemd[1]: sshd@12-139.178.70.103:22-139.178.68.195:55702.service: Deactivated successfully. Mar 20 17:58:51.422935 systemd[1]: session-15.scope: Deactivated successfully. Mar 20 17:58:51.424004 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit. Mar 20 17:58:51.424738 systemd-logind[1540]: Removed session 15. Mar 20 17:58:56.433684 systemd[1]: Started sshd@13-139.178.70.103:22-139.178.68.195:37380.service - OpenSSH per-connection server daemon (139.178.68.195:37380). Mar 20 17:58:56.471113 sshd[4321]: Accepted publickey for core from 139.178.68.195 port 37380 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:58:56.471986 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:58:56.474527 systemd-logind[1540]: New session 16 of user core. Mar 20 17:58:56.485191 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 20 17:58:56.573762 sshd[4323]: Connection closed by 139.178.68.195 port 37380 Mar 20 17:58:56.574302 sshd-session[4321]: pam_unix(sshd:session): session closed for user core Mar 20 17:58:56.576100 systemd[1]: sshd@13-139.178.70.103:22-139.178.68.195:37380.service: Deactivated successfully. Mar 20 17:58:56.577534 systemd[1]: session-16.scope: Deactivated successfully. Mar 20 17:58:56.578861 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Mar 20 17:58:56.579702 systemd-logind[1540]: Removed session 16. 
Mar 20 17:59:01.586328 systemd[1]: Started sshd@14-139.178.70.103:22-139.178.68.195:37382.service - OpenSSH per-connection server daemon (139.178.68.195:37382). Mar 20 17:59:01.626361 sshd[4335]: Accepted publickey for core from 139.178.68.195 port 37382 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:01.627204 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:01.630614 systemd-logind[1540]: New session 17 of user core. Mar 20 17:59:01.642274 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 20 17:59:01.737386 sshd[4337]: Connection closed by 139.178.68.195 port 37382 Mar 20 17:59:01.737998 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:01.744684 systemd[1]: sshd@14-139.178.70.103:22-139.178.68.195:37382.service: Deactivated successfully. Mar 20 17:59:01.746157 systemd[1]: session-17.scope: Deactivated successfully. Mar 20 17:59:01.747351 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit. Mar 20 17:59:01.748730 systemd[1]: Started sshd@15-139.178.70.103:22-139.178.68.195:37398.service - OpenSSH per-connection server daemon (139.178.68.195:37398). Mar 20 17:59:01.750379 systemd-logind[1540]: Removed session 17. Mar 20 17:59:01.785663 sshd[4347]: Accepted publickey for core from 139.178.68.195 port 37398 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:01.786860 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:01.791369 systemd-logind[1540]: New session 18 of user core. Mar 20 17:59:01.801192 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 20 17:59:02.565342 sshd[4350]: Connection closed by 139.178.68.195 port 37398 Mar 20 17:59:02.569628 sshd-session[4347]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:02.579217 systemd[1]: Started sshd@16-139.178.70.103:22-139.178.68.195:37400.service - OpenSSH per-connection server daemon (139.178.68.195:37400). Mar 20 17:59:02.581710 systemd[1]: sshd@15-139.178.70.103:22-139.178.68.195:37398.service: Deactivated successfully. Mar 20 17:59:02.584353 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 17:59:02.585597 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit. Mar 20 17:59:02.587156 systemd-logind[1540]: Removed session 18. Mar 20 17:59:02.689804 sshd[4357]: Accepted publickey for core from 139.178.68.195 port 37400 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:02.691184 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:02.698646 systemd-logind[1540]: New session 19 of user core. Mar 20 17:59:02.707100 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 20 17:59:05.016374 sshd[4362]: Connection closed by 139.178.68.195 port 37400 Mar 20 17:59:05.017020 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:05.036309 systemd[1]: sshd@16-139.178.70.103:22-139.178.68.195:37400.service: Deactivated successfully. Mar 20 17:59:05.039537 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 17:59:05.041640 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit. Mar 20 17:59:05.044723 systemd[1]: Started sshd@17-139.178.70.103:22-139.178.68.195:37416.service - OpenSSH per-connection server daemon (139.178.68.195:37416). 
Mar 20 17:59:05.046139 systemd-logind[1540]: Removed session 19. Mar 20 17:59:05.096743 sshd[4379]: Accepted publickey for core from 139.178.68.195 port 37416 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:05.098163 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:05.103072 systemd-logind[1540]: New session 20 of user core. Mar 20 17:59:05.108121 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 17:59:05.640823 sshd[4382]: Connection closed by 139.178.68.195 port 37416 Mar 20 17:59:05.641435 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:05.653721 systemd[1]: sshd@17-139.178.70.103:22-139.178.68.195:37416.service: Deactivated successfully. Mar 20 17:59:05.656933 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 17:59:05.658578 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit. Mar 20 17:59:05.660563 systemd[1]: Started sshd@18-139.178.70.103:22-139.178.68.195:59918.service - OpenSSH per-connection server daemon (139.178.68.195:59918). Mar 20 17:59:05.663292 systemd-logind[1540]: Removed session 20. Mar 20 17:59:05.702424 sshd[4391]: Accepted publickey for core from 139.178.68.195 port 59918 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:05.703992 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:05.707426 systemd-logind[1540]: New session 21 of user core. Mar 20 17:59:05.712066 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 20 17:59:05.830351 sshd[4394]: Connection closed by 139.178.68.195 port 59918 Mar 20 17:59:05.830791 sshd-session[4391]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:05.833336 systemd[1]: sshd@18-139.178.70.103:22-139.178.68.195:59918.service: Deactivated successfully. Mar 20 17:59:05.834573 systemd[1]: session-21.scope: Deactivated successfully. Mar 20 17:59:05.835189 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit. Mar 20 17:59:05.836047 systemd-logind[1540]: Removed session 21. Mar 20 17:59:10.841382 systemd[1]: Started sshd@19-139.178.70.103:22-139.178.68.195:59934.service - OpenSSH per-connection server daemon (139.178.68.195:59934). Mar 20 17:59:10.875768 sshd[4405]: Accepted publickey for core from 139.178.68.195 port 59934 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:10.876611 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:10.880587 systemd-logind[1540]: New session 22 of user core. Mar 20 17:59:10.889106 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 20 17:59:11.021687 sshd[4407]: Connection closed by 139.178.68.195 port 59934 Mar 20 17:59:11.024602 systemd[1]: sshd@19-139.178.70.103:22-139.178.68.195:59934.service: Deactivated successfully. Mar 20 17:59:11.022287 sshd-session[4405]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:11.025912 systemd[1]: session-22.scope: Deactivated successfully. Mar 20 17:59:11.027447 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit. Mar 20 17:59:11.028636 systemd-logind[1540]: Removed session 22. Mar 20 17:59:16.033450 systemd[1]: Started sshd@20-139.178.70.103:22-139.178.68.195:37728.service - OpenSSH per-connection server daemon (139.178.68.195:37728). 
Mar 20 17:59:16.070673 sshd[4422]: Accepted publickey for core from 139.178.68.195 port 37728 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:16.071567 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:16.074622 systemd-logind[1540]: New session 23 of user core. Mar 20 17:59:16.082100 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 20 17:59:16.178987 sshd[4424]: Connection closed by 139.178.68.195 port 37728 Mar 20 17:59:16.179336 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:16.181490 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit. Mar 20 17:59:16.181680 systemd[1]: sshd@20-139.178.70.103:22-139.178.68.195:37728.service: Deactivated successfully. Mar 20 17:59:16.182863 systemd[1]: session-23.scope: Deactivated successfully. Mar 20 17:59:16.183565 systemd-logind[1540]: Removed session 23. Mar 20 17:59:21.188909 systemd[1]: Started sshd@21-139.178.70.103:22-139.178.68.195:37736.service - OpenSSH per-connection server daemon (139.178.68.195:37736). Mar 20 17:59:21.229983 sshd[4438]: Accepted publickey for core from 139.178.68.195 port 37736 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:21.231259 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:21.234262 systemd-logind[1540]: New session 24 of user core. Mar 20 17:59:21.238032 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 20 17:59:21.388968 sshd[4440]: Connection closed by 139.178.68.195 port 37736 Mar 20 17:59:21.389425 sshd-session[4438]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:21.391984 systemd[1]: sshd@21-139.178.70.103:22-139.178.68.195:37736.service: Deactivated successfully. Mar 20 17:59:21.393235 systemd[1]: session-24.scope: Deactivated successfully. Mar 20 17:59:21.393736 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit. Mar 20 17:59:21.394372 systemd-logind[1540]: Removed session 24. Mar 20 17:59:26.399058 systemd[1]: Started sshd@22-139.178.70.103:22-139.178.68.195:51860.service - OpenSSH per-connection server daemon (139.178.68.195:51860). Mar 20 17:59:26.447417 sshd[4452]: Accepted publickey for core from 139.178.68.195 port 51860 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:26.448146 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:26.451663 systemd-logind[1540]: New session 25 of user core. Mar 20 17:59:26.459027 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 20 17:59:26.567495 sshd[4454]: Connection closed by 139.178.68.195 port 51860 Mar 20 17:59:26.568357 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:26.574112 systemd[1]: sshd@22-139.178.70.103:22-139.178.68.195:51860.service: Deactivated successfully. Mar 20 17:59:26.575006 systemd[1]: session-25.scope: Deactivated successfully. Mar 20 17:59:26.575782 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit. Mar 20 17:59:26.576623 systemd[1]: Started sshd@23-139.178.70.103:22-139.178.68.195:51862.service - OpenSSH per-connection server daemon (139.178.68.195:51862). Mar 20 17:59:26.577272 systemd-logind[1540]: Removed session 25. 
Mar 20 17:59:26.613156 sshd[4465]: Accepted publickey for core from 139.178.68.195 port 51862 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:26.613888 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:26.616574 systemd-logind[1540]: New session 26 of user core. Mar 20 17:59:26.627026 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 20 17:59:28.055268 containerd[1563]: time="2025-03-20T17:59:28.055056826Z" level=info msg="StopContainer for \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" with timeout 30 (s)" Mar 20 17:59:28.058091 containerd[1563]: time="2025-03-20T17:59:28.057793843Z" level=info msg="Stop container \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" with signal terminated" Mar 20 17:59:28.071334 systemd[1]: cri-containerd-2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd.scope: Deactivated successfully. Mar 20 17:59:28.073829 containerd[1563]: time="2025-03-20T17:59:28.073798092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" id:\"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" pid:3506 exited_at:{seconds:1742493568 nanos:73380844}" Mar 20 17:59:28.073829 containerd[1563]: time="2025-03-20T17:59:28.073810853Z" level=info msg="received exit event container_id:\"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" id:\"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" pid:3506 exited_at:{seconds:1742493568 nanos:73380844}" Mar 20 17:59:28.088374 containerd[1563]: time="2025-03-20T17:59:28.088197503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" id:\"ab08b099c8b67e8aa24492ee075c729381e3c1fd4d118497df9763c49a4ee1f6\" pid:4495 exited_at:{seconds:1742493568 nanos:87706061}" Mar 20 17:59:28.091111 containerd[1563]: time="2025-03-20T17:59:28.091088867Z" level=info msg="StopContainer for \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" with timeout 2 (s)" Mar 20 17:59:28.093313 containerd[1563]: time="2025-03-20T17:59:28.091541393Z" level=info msg="Stop container \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" with signal terminated" Mar 20 17:59:28.093161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd-rootfs.mount: Deactivated successfully. 
Mar 20 17:59:28.098038 containerd[1563]: time="2025-03-20T17:59:28.097769571Z" level=info msg="StopContainer for \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" returns successfully" Mar 20 17:59:28.098326 containerd[1563]: time="2025-03-20T17:59:28.098311575Z" level=info msg="StopPodSandbox for \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\"" Mar 20 17:59:28.098368 containerd[1563]: time="2025-03-20T17:59:28.098358378Z" level=info msg="Container to stop \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 17:59:28.098661 containerd[1563]: time="2025-03-20T17:59:28.098505116Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 17:59:28.102196 systemd-networkd[1463]: lxc_health: Link DOWN Mar 20 17:59:28.102466 systemd-networkd[1463]: lxc_health: Lost carrier Mar 20 17:59:28.105802 systemd[1]: cri-containerd-42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812.scope: Deactivated successfully. Mar 20 17:59:28.107881 containerd[1563]: time="2025-03-20T17:59:28.107505949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" id:\"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" pid:3072 exit_status:137 exited_at:{seconds:1742493568 nanos:107210320}" Mar 20 17:59:28.118204 systemd[1]: cri-containerd-c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b.scope: Deactivated successfully. Mar 20 17:59:28.118397 systemd[1]: cri-containerd-c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b.scope: Consumed 4.551s CPU time, 198.2M memory peak, 71.6M read from disk, 13.3M written to disk. Mar 20 17:59:28.119975 containerd[1563]: time="2025-03-20T17:59:28.119956258Z" level=info msg="received exit event container_id:\"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" id:\"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" pid:3539 exited_at:{seconds:1742493568 nanos:119618130}" Mar 20 17:59:28.129912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812-rootfs.mount: Deactivated successfully. Mar 20 17:59:28.132904 containerd[1563]: time="2025-03-20T17:59:28.132792034Z" level=info msg="received exit event sandbox_id:\"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" exit_status:137 exited_at:{seconds:1742493568 nanos:107210320}" Mar 20 17:59:28.134638 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812-shm.mount: Deactivated successfully. 
Mar 20 17:59:28.136101 containerd[1563]: time="2025-03-20T17:59:28.134629350Z" level=info msg="shim disconnected" id=42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812 namespace=k8s.io Mar 20 17:59:28.136101 containerd[1563]: time="2025-03-20T17:59:28.134654857Z" level=warning msg="cleaning up after shim disconnected" id=42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812 namespace=k8s.io Mar 20 17:59:28.138601 containerd[1563]: time="2025-03-20T17:59:28.134664189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 17:59:28.138636 containerd[1563]: time="2025-03-20T17:59:28.136172965Z" level=info msg="TearDown network for sandbox \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" successfully" Mar 20 17:59:28.138636 containerd[1563]: time="2025-03-20T17:59:28.138630729Z" level=info msg="StopPodSandbox for \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" returns successfully" Mar 20 17:59:28.147074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b-rootfs.mount: Deactivated successfully. Mar 20 17:59:28.149943 containerd[1563]: time="2025-03-20T17:59:28.149757879Z" level=info msg="StopContainer for \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" returns successfully" Mar 20 17:59:28.150088 containerd[1563]: time="2025-03-20T17:59:28.150046084Z" level=info msg="StopPodSandbox for \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\"" Mar 20 17:59:28.150127 containerd[1563]: time="2025-03-20T17:59:28.150095680Z" level=info msg="Container to stop \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 17:59:28.150127 containerd[1563]: time="2025-03-20T17:59:28.150105102Z" level=info msg="Container to stop \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 17:59:28.150127 containerd[1563]: time="2025-03-20T17:59:28.150109872Z" level=info msg="Container to stop \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 17:59:28.150127 containerd[1563]: time="2025-03-20T17:59:28.150117819Z" level=info msg="Container to stop \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 17:59:28.150655 containerd[1563]: time="2025-03-20T17:59:28.150123273Z" level=info msg="Container to stop \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 17:59:28.157634 systemd[1]: cri-containerd-48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af.scope: Deactivated successfully. 
Mar 20 17:59:28.168107 containerd[1563]: time="2025-03-20T17:59:28.168009473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" id:\"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" pid:3539 exited_at:{seconds:1742493568 nanos:119618130}" Mar 20 17:59:28.168107 containerd[1563]: time="2025-03-20T17:59:28.168051500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" id:\"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" pid:3106 exit_status:137 exited_at:{seconds:1742493568 nanos:157453652}" Mar 20 17:59:28.177577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af-rootfs.mount: Deactivated successfully. Mar 20 17:59:28.179200 containerd[1563]: time="2025-03-20T17:59:28.179033123Z" level=info msg="shim disconnected" id=48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af namespace=k8s.io Mar 20 17:59:28.179200 containerd[1563]: time="2025-03-20T17:59:28.179060003Z" level=warning msg="cleaning up after shim disconnected" id=48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af namespace=k8s.io Mar 20 17:59:28.179200 containerd[1563]: time="2025-03-20T17:59:28.179065703Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 17:59:28.179694 containerd[1563]: time="2025-03-20T17:59:28.179443677Z" level=info msg="received exit event sandbox_id:\"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" exit_status:137 exited_at:{seconds:1742493568 nanos:157453652}" Mar 20 17:59:28.180405 containerd[1563]: time="2025-03-20T17:59:28.179610078Z" level=info msg="TearDown network for sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" successfully" Mar 20 17:59:28.180405 containerd[1563]: time="2025-03-20T17:59:28.180403473Z" level=info msg="StopPodSandbox for \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" returns successfully" Mar 20 17:59:28.222215 kubelet[2923]: I0320 17:59:28.222176 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-host-proc-sys-net\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224124 kubelet[2923]: I0320 17:59:28.222224 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-xtables-lock\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224124 kubelet[2923]: I0320 17:59:28.222240 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5d9a572-4e10-4b9a-8edf-8113cba4ece8-cilium-config-path\") pod \"c5d9a572-4e10-4b9a-8edf-8113cba4ece8\" (UID: \"c5d9a572-4e10-4b9a-8edf-8113cba4ece8\") " Mar 20 17:59:28.224124 kubelet[2923]: I0320 17:59:28.222252 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-config-path\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 
17:59:28.224124 kubelet[2923]: I0320 17:59:28.222261 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-etc-cni-netd\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224124 kubelet[2923]: I0320 17:59:28.222270 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-run\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224124 kubelet[2923]: I0320 17:59:28.222283 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cni-path\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224241 kubelet[2923]: I0320 17:59:28.222292 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-cgroup\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224241 kubelet[2923]: I0320 17:59:28.222302 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-clustermesh-secrets\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224241 kubelet[2923]: I0320 17:59:28.222312 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm6ch\" (UniqueName: \"kubernetes.io/projected/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-kube-api-access-bm6ch\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224241 kubelet[2923]: I0320 17:59:28.222325 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwrp5\" (UniqueName: \"kubernetes.io/projected/c5d9a572-4e10-4b9a-8edf-8113cba4ece8-kube-api-access-vwrp5\") pod \"c5d9a572-4e10-4b9a-8edf-8113cba4ece8\" (UID: \"c5d9a572-4e10-4b9a-8edf-8113cba4ece8\") " Mar 20 17:59:28.224241 kubelet[2923]: I0320 17:59:28.222334 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-host-proc-sys-kernel\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224241 kubelet[2923]: I0320 17:59:28.222342 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-bpf-maps\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224345 kubelet[2923]: I0320 17:59:28.222351 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-lib-modules\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224345 kubelet[2923]: I0320 
17:59:28.222359 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-hubble-tls\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224345 kubelet[2923]: I0320 17:59:28.222367 2923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-hostproc\") pod \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\" (UID: \"4ff71a17-d60c-4aa8-b527-c5a5a9108b50\") " Mar 20 17:59:28.224345 kubelet[2923]: I0320 17:59:28.222400 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-hostproc" (OuterVolumeSpecName: "hostproc") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.224345 kubelet[2923]: I0320 17:59:28.223404 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.224428 kubelet[2923]: I0320 17:59:28.221792 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.224428 kubelet[2923]: I0320 17:59:28.223421 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.233957 kubelet[2923]: I0320 17:59:28.232636 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d9a572-4e10-4b9a-8edf-8113cba4ece8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c5d9a572-4e10-4b9a-8edf-8113cba4ece8" (UID: "c5d9a572-4e10-4b9a-8edf-8113cba4ece8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 17:59:28.235810 kubelet[2923]: I0320 17:59:28.235563 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 17:59:28.235953 kubelet[2923]: I0320 17:59:28.235873 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.236087 kubelet[2923]: I0320 17:59:28.235999 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.236136 kubelet[2923]: I0320 17:59:28.236128 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cni-path" (OuterVolumeSpecName: "cni-path") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.236391 kubelet[2923]: I0320 17:59:28.236301 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:59:28.237124 kubelet[2923]: I0320 17:59:28.236444 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.237304 kubelet[2923]: I0320 17:59:28.237209 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-kube-api-access-bm6ch" (OuterVolumeSpecName: "kube-api-access-bm6ch") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "kube-api-access-bm6ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:59:28.237304 kubelet[2923]: I0320 17:59:28.237235 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.240189 kubelet[2923]: I0320 17:59:28.240172 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:59:28.240237 kubelet[2923]: I0320 17:59:28.240195 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4ff71a17-d60c-4aa8-b527-c5a5a9108b50" (UID: "4ff71a17-d60c-4aa8-b527-c5a5a9108b50"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:59:28.240310 kubelet[2923]: I0320 17:59:28.240298 2923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d9a572-4e10-4b9a-8edf-8113cba4ece8-kube-api-access-vwrp5" (OuterVolumeSpecName: "kube-api-access-vwrp5") pod "c5d9a572-4e10-4b9a-8edf-8113cba4ece8" (UID: "c5d9a572-4e10-4b9a-8edf-8113cba4ece8"). InnerVolumeSpecName "kube-api-access-vwrp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:59:28.324271 kubelet[2923]: I0320 17:59:28.323528 2923 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.324271 kubelet[2923]: I0320 17:59:28.324130 2923 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.324271 kubelet[2923]: I0320 17:59:28.324140 2923 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.324403 kubelet[2923]: I0320 17:59:28.324337 2923 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.324403 kubelet[2923]: I0320 17:59:28.324357 2923 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.324403 kubelet[2923]: I0320 17:59:28.324364 2923 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.324457 kubelet[2923]: I0320 17:59:28.324413 2923 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bm6ch\" (UniqueName: \"kubernetes.io/projected/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-kube-api-access-bm6ch\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.324457 kubelet[2923]: I0320 17:59:28.324426 2923 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vwrp5\" (UniqueName: \"kubernetes.io/projected/c5d9a572-4e10-4b9a-8edf-8113cba4ece8-kube-api-access-vwrp5\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.324457 kubelet[2923]: I0320 17:59:28.324432 2923 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.326329 kubelet[2923]: I0320 17:59:28.324523 2923 reconciler_common.go:289] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.326329 kubelet[2923]: I0320 17:59:28.324533 2923 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5d9a572-4e10-4b9a-8edf-8113cba4ece8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.326329 kubelet[2923]: I0320 17:59:28.324542 2923 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.326329 kubelet[2923]: I0320 17:59:28.324546 2923 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.326329 kubelet[2923]: I0320 17:59:28.324631 2923 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.326329 kubelet[2923]: I0320 17:59:28.324637 2923 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.326329 kubelet[2923]: I0320 17:59:28.324644 2923 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ff71a17-d60c-4aa8-b527-c5a5a9108b50-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 20 17:59:28.464117 systemd[1]: Removed slice kubepods-besteffort-podc5d9a572_4e10_4b9a_8edf_8113cba4ece8.slice - libcontainer container kubepods-besteffort-podc5d9a572_4e10_4b9a_8edf_8113cba4ece8.slice. Mar 20 17:59:28.480144 kubelet[2923]: I0320 17:59:28.480117 2923 scope.go:117] "RemoveContainer" containerID="2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd" Mar 20 17:59:28.482835 containerd[1563]: time="2025-03-20T17:59:28.482781225Z" level=info msg="RemoveContainer for \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\"" Mar 20 17:59:28.488537 systemd[1]: Removed slice kubepods-burstable-pod4ff71a17_d60c_4aa8_b527_c5a5a9108b50.slice - libcontainer container kubepods-burstable-pod4ff71a17_d60c_4aa8_b527_c5a5a9108b50.slice. Mar 20 17:59:28.488623 systemd[1]: kubepods-burstable-pod4ff71a17_d60c_4aa8_b527_c5a5a9108b50.slice: Consumed 4.607s CPU time, 199.4M memory peak, 72.7M read from disk, 13.3M written to disk. 
Mar 20 17:59:28.498822 containerd[1563]: time="2025-03-20T17:59:28.498796577Z" level=info msg="RemoveContainer for \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" returns successfully" Mar 20 17:59:28.499050 kubelet[2923]: I0320 17:59:28.499033 2923 scope.go:117] "RemoveContainer" containerID="2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd" Mar 20 17:59:28.504247 containerd[1563]: time="2025-03-20T17:59:28.500074118Z" level=error msg="ContainerStatus for \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\": not found" Mar 20 17:59:28.514932 kubelet[2923]: E0320 17:59:28.514901 2923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\": not found" containerID="2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd" Mar 20 17:59:28.515054 kubelet[2923]: I0320 17:59:28.514945 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd"} err="failed to get container status \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a05a4f6c32b580f58b75226fa56900680c17d10d4b318f8438912c9b1d1eebd\": not found" Mar 20 17:59:28.515054 kubelet[2923]: I0320 17:59:28.515000 2923 scope.go:117] "RemoveContainer" containerID="c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b" Mar 20 17:59:28.516129 containerd[1563]: time="2025-03-20T17:59:28.516112324Z" level=info msg="RemoveContainer for \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\"" Mar 20 17:59:28.525355 containerd[1563]: time="2025-03-20T17:59:28.525333416Z" level=info msg="RemoveContainer for \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" returns successfully" Mar 20 17:59:28.525570 kubelet[2923]: I0320 17:59:28.525553 2923 scope.go:117] "RemoveContainer" containerID="6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7" Mar 20 17:59:28.526314 containerd[1563]: time="2025-03-20T17:59:28.526274787Z" level=info msg="RemoveContainer for \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\"" Mar 20 17:59:28.528126 containerd[1563]: time="2025-03-20T17:59:28.528090064Z" level=info msg="RemoveContainer for \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\" returns successfully" Mar 20 17:59:28.528178 kubelet[2923]: I0320 17:59:28.528152 2923 scope.go:117] "RemoveContainer" containerID="76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb" Mar 20 17:59:28.529351 containerd[1563]: time="2025-03-20T17:59:28.529314457Z" level=info msg="RemoveContainer for \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\"" Mar 20 17:59:28.531139 containerd[1563]: time="2025-03-20T17:59:28.531102155Z" level=info msg="RemoveContainer for \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\" returns successfully" Mar 20 17:59:28.531192 kubelet[2923]: I0320 17:59:28.531183 2923 scope.go:117] "RemoveContainer" containerID="8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d" Mar 20 17:59:28.532352 containerd[1563]: 
time="2025-03-20T17:59:28.532004182Z" level=info msg="RemoveContainer for \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\"" Mar 20 17:59:28.533378 containerd[1563]: time="2025-03-20T17:59:28.533365700Z" level=info msg="RemoveContainer for \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\" returns successfully" Mar 20 17:59:28.533545 kubelet[2923]: I0320 17:59:28.533530 2923 scope.go:117] "RemoveContainer" containerID="17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd" Mar 20 17:59:28.534411 containerd[1563]: time="2025-03-20T17:59:28.534279781Z" level=info msg="RemoveContainer for \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\"" Mar 20 17:59:28.541776 containerd[1563]: time="2025-03-20T17:59:28.541755856Z" level=info msg="RemoveContainer for \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\" returns successfully" Mar 20 17:59:28.541982 kubelet[2923]: I0320 17:59:28.541969 2923 scope.go:117] "RemoveContainer" containerID="c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b" Mar 20 17:59:28.542339 containerd[1563]: time="2025-03-20T17:59:28.542185393Z" level=error msg="ContainerStatus for \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\": not found" Mar 20 17:59:28.542376 kubelet[2923]: E0320 17:59:28.542267 2923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\": not found" containerID="c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b" Mar 20 17:59:28.542376 kubelet[2923]: I0320 17:59:28.542282 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b"} err="failed to get container status \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1d8e11af530d0cbbcccf1a6c44733b283d70d07784196632a07bec09fcbf28b\": not found" Mar 20 17:59:28.542376 kubelet[2923]: I0320 17:59:28.542295 2923 scope.go:117] "RemoveContainer" containerID="6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7" Mar 20 17:59:28.542799 containerd[1563]: time="2025-03-20T17:59:28.542574165Z" level=error msg="ContainerStatus for \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\": not found" Mar 20 17:59:28.542799 containerd[1563]: time="2025-03-20T17:59:28.542769611Z" level=error msg="ContainerStatus for \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\": not found" Mar 20 17:59:28.542864 kubelet[2923]: E0320 17:59:28.542634 2923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\": not found" 
containerID="6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7" Mar 20 17:59:28.542864 kubelet[2923]: I0320 17:59:28.542650 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7"} err="failed to get container status \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6dcd53191737926e71e3472a52c281497db06f129d6e6b920e5e412988086df7\": not found" Mar 20 17:59:28.542864 kubelet[2923]: I0320 17:59:28.542659 2923 scope.go:117] "RemoveContainer" containerID="76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb" Mar 20 17:59:28.542864 kubelet[2923]: E0320 17:59:28.542823 2923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\": not found" containerID="76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb" Mar 20 17:59:28.542864 kubelet[2923]: I0320 17:59:28.542836 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb"} err="failed to get container status \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"76d14481ea0c00339a9ae4f91658a56379594ab8235bb8d24350ac343562c4fb\": not found" Mar 20 17:59:28.542864 kubelet[2923]: I0320 17:59:28.542844 2923 scope.go:117] "RemoveContainer" containerID="8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d" Mar 20 17:59:28.543230 containerd[1563]: time="2025-03-20T17:59:28.543076388Z" level=error msg="ContainerStatus for \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\": not found" Mar 20 17:59:28.543261 kubelet[2923]: E0320 17:59:28.543136 2923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\": not found" containerID="8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d" Mar 20 17:59:28.543261 kubelet[2923]: I0320 17:59:28.543145 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d"} err="failed to get container status \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b81237a35802426740608e20c392debef18c31d141b975f24352bb8c01a713d\": not found" Mar 20 17:59:28.543261 kubelet[2923]: I0320 17:59:28.543152 2923 scope.go:117] "RemoveContainer" containerID="17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd" Mar 20 17:59:28.543440 kubelet[2923]: E0320 17:59:28.543400 2923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\": not found" 
containerID="17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd" Mar 20 17:59:28.543440 kubelet[2923]: I0320 17:59:28.543409 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd"} err="failed to get container status \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\": not found" Mar 20 17:59:28.543479 containerd[1563]: time="2025-03-20T17:59:28.543350669Z" level=error msg="ContainerStatus for \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17308cf28b2c2ebf6aabb3e66132c7eafc277b2657b804c7dac9b431215018bd\": not found" Mar 20 17:59:29.093100 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af-shm.mount: Deactivated successfully. Mar 20 17:59:29.093193 systemd[1]: var-lib-kubelet-pods-c5d9a572\x2d4e10\x2d4b9a\x2d8edf\x2d8113cba4ece8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvwrp5.mount: Deactivated successfully. Mar 20 17:59:29.093265 systemd[1]: var-lib-kubelet-pods-4ff71a17\x2dd60c\x2d4aa8\x2db527\x2dc5a5a9108b50-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbm6ch.mount: Deactivated successfully. Mar 20 17:59:29.093324 systemd[1]: var-lib-kubelet-pods-4ff71a17\x2dd60c\x2d4aa8\x2db527\x2dc5a5a9108b50-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 20 17:59:29.093384 systemd[1]: var-lib-kubelet-pods-4ff71a17\x2dd60c\x2d4aa8\x2db527\x2dc5a5a9108b50-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 20 17:59:30.010667 sshd[4468]: Connection closed by 139.178.68.195 port 51862 Mar 20 17:59:30.010920 sshd-session[4465]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:30.018175 systemd[1]: sshd@23-139.178.70.103:22-139.178.68.195:51862.service: Deactivated successfully. Mar 20 17:59:30.019191 systemd[1]: session-26.scope: Deactivated successfully. Mar 20 17:59:30.020052 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit. Mar 20 17:59:30.021106 systemd[1]: Started sshd@24-139.178.70.103:22-139.178.68.195:51872.service - OpenSSH per-connection server daemon (139.178.68.195:51872). Mar 20 17:59:30.023022 systemd-logind[1540]: Removed session 26. Mar 20 17:59:30.082360 sshd[4617]: Accepted publickey for core from 139.178.68.195 port 51872 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:30.083263 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:30.087735 systemd-logind[1540]: New session 27 of user core. 
Mar 20 17:59:30.088575 kubelet[2923]: I0320 17:59:30.088553 2923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff71a17-d60c-4aa8-b527-c5a5a9108b50" path="/var/lib/kubelet/pods/4ff71a17-d60c-4aa8-b527-c5a5a9108b50/volumes" Mar 20 17:59:30.092108 kubelet[2923]: I0320 17:59:30.088892 2923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d9a572-4e10-4b9a-8edf-8113cba4ece8" path="/var/lib/kubelet/pods/c5d9a572-4e10-4b9a-8edf-8113cba4ece8/volumes" Mar 20 17:59:30.092136 containerd[1563]: time="2025-03-20T17:59:30.089493282Z" level=info msg="StopPodSandbox for \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\"" Mar 20 17:59:30.092136 containerd[1563]: time="2025-03-20T17:59:30.089619559Z" level=info msg="TearDown network for sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" successfully" Mar 20 17:59:30.092136 containerd[1563]: time="2025-03-20T17:59:30.089628703Z" level=info msg="StopPodSandbox for \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" returns successfully" Mar 20 17:59:30.092136 containerd[1563]: time="2025-03-20T17:59:30.089852691Z" level=info msg="RemovePodSandbox for \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\"" Mar 20 17:59:30.092136 containerd[1563]: time="2025-03-20T17:59:30.091265177Z" level=info msg="Forcibly stopping sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\"" Mar 20 17:59:30.092388 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 20 17:59:30.102397 containerd[1563]: time="2025-03-20T17:59:30.102373946Z" level=info msg="TearDown network for sandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" successfully" Mar 20 17:59:30.103701 containerd[1563]: time="2025-03-20T17:59:30.103685757Z" level=info msg="Ensure that sandbox 48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af in task-service has been cleanup successfully" Mar 20 17:59:30.122532 containerd[1563]: time="2025-03-20T17:59:30.122498357Z" level=info msg="RemovePodSandbox \"48e512c44fcd620a565acd277674b10c3b480753f0444f73073691ef869ee2af\" returns successfully" Mar 20 17:59:30.122837 containerd[1563]: time="2025-03-20T17:59:30.122823710Z" level=info msg="StopPodSandbox for \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\"" Mar 20 17:59:30.123055 containerd[1563]: time="2025-03-20T17:59:30.123001920Z" level=info msg="TearDown network for sandbox \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" successfully" Mar 20 17:59:30.123055 containerd[1563]: time="2025-03-20T17:59:30.123012934Z" level=info msg="StopPodSandbox for \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" returns successfully" Mar 20 17:59:30.123315 containerd[1563]: time="2025-03-20T17:59:30.123229112Z" level=info msg="RemovePodSandbox for \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\"" Mar 20 17:59:30.123315 containerd[1563]: time="2025-03-20T17:59:30.123251418Z" level=info msg="Forcibly stopping sandbox \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\"" Mar 20 17:59:30.123315 containerd[1563]: time="2025-03-20T17:59:30.123296278Z" level=info msg="TearDown network for sandbox \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" successfully" Mar 20 17:59:30.124169 containerd[1563]: time="2025-03-20T17:59:30.123979896Z" level=info msg="Ensure that sandbox 42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812 in 
task-service has been cleanup successfully" Mar 20 17:59:30.141473 containerd[1563]: time="2025-03-20T17:59:30.141449486Z" level=info msg="RemovePodSandbox \"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812\" returns successfully" Mar 20 17:59:30.148906 kubelet[2923]: E0320 17:59:30.148876 2923 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 17:59:30.501207 sshd[4622]: Connection closed by 139.178.68.195 port 51872 Mar 20 17:59:30.501792 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:30.510805 systemd[1]: sshd@24-139.178.70.103:22-139.178.68.195:51872.service: Deactivated successfully. Mar 20 17:59:30.514910 systemd[1]: session-27.scope: Deactivated successfully. Mar 20 17:59:30.516890 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit. Mar 20 17:59:30.519785 systemd[1]: Started sshd@25-139.178.70.103:22-139.178.68.195:51878.service - OpenSSH per-connection server daemon (139.178.68.195:51878). Mar 20 17:59:30.524143 systemd-logind[1540]: Removed session 27. Mar 20 17:59:30.552969 kubelet[2923]: I0320 17:59:30.552926 2923 topology_manager.go:215] "Topology Admit Handler" podUID="615b5272-e43a-41d9-b6f0-05d9ac5e7f29" podNamespace="kube-system" podName="cilium-lqcqh" Mar 20 17:59:30.553128 kubelet[2923]: E0320 17:59:30.552995 2923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ff71a17-d60c-4aa8-b527-c5a5a9108b50" containerName="apply-sysctl-overwrites" Mar 20 17:59:30.553128 kubelet[2923]: E0320 17:59:30.553003 2923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ff71a17-d60c-4aa8-b527-c5a5a9108b50" containerName="mount-bpf-fs" Mar 20 17:59:30.553128 kubelet[2923]: E0320 17:59:30.553008 2923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5d9a572-4e10-4b9a-8edf-8113cba4ece8" containerName="cilium-operator" Mar 20 17:59:30.553128 kubelet[2923]: E0320 17:59:30.553012 2923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ff71a17-d60c-4aa8-b527-c5a5a9108b50" containerName="mount-cgroup" Mar 20 17:59:30.553128 kubelet[2923]: E0320 17:59:30.553015 2923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ff71a17-d60c-4aa8-b527-c5a5a9108b50" containerName="clean-cilium-state" Mar 20 17:59:30.553128 kubelet[2923]: E0320 17:59:30.553019 2923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ff71a17-d60c-4aa8-b527-c5a5a9108b50" containerName="cilium-agent" Mar 20 17:59:30.553128 kubelet[2923]: I0320 17:59:30.553038 2923 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d9a572-4e10-4b9a-8edf-8113cba4ece8" containerName="cilium-operator" Mar 20 17:59:30.553128 kubelet[2923]: I0320 17:59:30.553043 2923 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff71a17-d60c-4aa8-b527-c5a5a9108b50" containerName="cilium-agent" Mar 20 17:59:30.573006 sshd[4631]: Accepted publickey for core from 139.178.68.195 port 51878 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:30.574354 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:30.579375 systemd[1]: Created slice kubepods-burstable-pod615b5272_e43a_41d9_b6f0_05d9ac5e7f29.slice - libcontainer container kubepods-burstable-pod615b5272_e43a_41d9_b6f0_05d9ac5e7f29.slice. 
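Note: the StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox sequence above is the CRI teardown path for the two old sandboxes. A sketch of the same two calls against the runtime service follows; the socket path and function name are assumptions, and in the log it is the kubelet, not a standalone client, that drives them.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeSandbox mirrors the order in the log: stop the sandbox (tearing down
// its network namespace) and then remove it. StopPodSandbox is defined as
// idempotent, so forcibly stopping an already-stopped sandbox still succeeds.
func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		return err
	}
	_, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
	return err
}

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Sandbox ID taken from the log entries above.
	if err := removeSandbox(ctx, runtimeapi.NewRuntimeServiceClient(conn),
		"42602ada152ef9056178eac150e81cf00b805f772d3769e89a3f638d3b237812"); err != nil {
		log.Fatal(err)
	}
}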
Mar 20 17:59:30.590215 systemd-logind[1540]: New session 28 of user core. Mar 20 17:59:30.598121 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 20 17:59:30.638924 kubelet[2923]: I0320 17:59:30.638838 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-cilium-ipsec-secrets\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.638924 kubelet[2923]: I0320 17:59:30.638871 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-cni-path\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.638924 kubelet[2923]: I0320 17:59:30.638884 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-xtables-lock\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.638924 kubelet[2923]: I0320 17:59:30.638896 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-cilium-config-path\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.638924 kubelet[2923]: I0320 17:59:30.638906 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-hubble-tls\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639151 kubelet[2923]: I0320 17:59:30.638982 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-cilium-run\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639151 kubelet[2923]: I0320 17:59:30.639013 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-lib-modules\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639151 kubelet[2923]: I0320 17:59:30.639030 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w25sk\" (UniqueName: \"kubernetes.io/projected/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-kube-api-access-w25sk\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639151 kubelet[2923]: I0320 17:59:30.639044 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-hostproc\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639151 kubelet[2923]: 
I0320 17:59:30.639053 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-host-proc-sys-net\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639151 kubelet[2923]: I0320 17:59:30.639064 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-cilium-cgroup\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639249 kubelet[2923]: I0320 17:59:30.639076 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-host-proc-sys-kernel\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639249 kubelet[2923]: I0320 17:59:30.639085 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-bpf-maps\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639249 kubelet[2923]: I0320 17:59:30.639094 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-etc-cni-netd\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.639249 kubelet[2923]: I0320 17:59:30.639103 2923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/615b5272-e43a-41d9-b6f0-05d9ac5e7f29-clustermesh-secrets\") pod \"cilium-lqcqh\" (UID: \"615b5272-e43a-41d9-b6f0-05d9ac5e7f29\") " pod="kube-system/cilium-lqcqh" Mar 20 17:59:30.651396 sshd[4634]: Connection closed by 139.178.68.195 port 51878 Mar 20 17:59:30.652601 sshd-session[4631]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:30.658188 systemd[1]: sshd@25-139.178.70.103:22-139.178.68.195:51878.service: Deactivated successfully. Mar 20 17:59:30.659757 systemd[1]: session-28.scope: Deactivated successfully. Mar 20 17:59:30.660635 systemd-logind[1540]: Session 28 logged out. Waiting for processes to exit. Mar 20 17:59:30.663917 systemd[1]: Started sshd@26-139.178.70.103:22-139.178.68.195:51890.service - OpenSSH per-connection server daemon (139.178.68.195:51890). Mar 20 17:59:30.664834 systemd-logind[1540]: Removed session 28. Mar 20 17:59:30.698222 sshd[4640]: Accepted publickey for core from 139.178.68.195 port 51890 ssh2: RSA SHA256:2bL7KMv6L66DM7WlnFmoSGWkbtnWPVxQN5k56nhXbOU Mar 20 17:59:30.699039 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:59:30.701827 systemd-logind[1540]: New session 29 of user core. Mar 20 17:59:30.711112 systemd[1]: Started session-29.scope - Session 29 of User core. 
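Note: the reconciler entries above attach a mix of hostPath, ConfigMap, Secret and projected volumes to cilium-lqcqh before it can start. A sketch of how a few of those volumes look as Kubernetes API objects; the host paths and the Secret/ConfigMap names are typical Cilium defaults assumed for illustration, since the log only shows the volume names.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ciliumVolumes() []corev1.Volume {
	hostPath := func(name, path string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: path},
			},
		}
	}
	return []corev1.Volume{
		hostPath("bpf-maps", "/sys/fs/bpf"),        // assumed default
		hostPath("cni-path", "/opt/cni/bin"),       // assumed default
		hostPath("etc-cni-netd", "/etc/cni/net.d"), // assumed default
		{
			Name: "clustermesh-secrets",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}, // assumed name
			},
		},
		{
			Name: "cilium-config-path",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"}, // assumed name
				},
			},
		},
	}
}

func main() {
	for _, v := range ciliumVolumes() {
		fmt.Println(v.Name)
	}
}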
Mar 20 17:59:30.901554 containerd[1563]: time="2025-03-20T17:59:30.901450511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqcqh,Uid:615b5272-e43a-41d9-b6f0-05d9ac5e7f29,Namespace:kube-system,Attempt:0,}" Mar 20 17:59:30.913745 containerd[1563]: time="2025-03-20T17:59:30.913710216Z" level=info msg="connecting to shim 48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157" address="unix:///run/containerd/s/6c9ffd1bda1cca5866e8bb19a8f48bc466916a227ebdedcc9ee7c843d50b07a9" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:59:30.933187 systemd[1]: Started cri-containerd-48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157.scope - libcontainer container 48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157. Mar 20 17:59:30.958399 containerd[1563]: time="2025-03-20T17:59:30.958285980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqcqh,Uid:615b5272-e43a-41d9-b6f0-05d9ac5e7f29,Namespace:kube-system,Attempt:0,} returns sandbox id \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\"" Mar 20 17:59:30.963137 containerd[1563]: time="2025-03-20T17:59:30.962990404Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 17:59:30.967096 containerd[1563]: time="2025-03-20T17:59:30.967051980Z" level=info msg="Container 90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:59:30.997903 containerd[1563]: time="2025-03-20T17:59:30.997853707Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b\"" Mar 20 17:59:30.998574 containerd[1563]: time="2025-03-20T17:59:30.998553680Z" level=info msg="StartContainer for \"90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b\"" Mar 20 17:59:30.999454 containerd[1563]: time="2025-03-20T17:59:30.999363601Z" level=info msg="connecting to shim 90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b" address="unix:///run/containerd/s/6c9ffd1bda1cca5866e8bb19a8f48bc466916a227ebdedcc9ee7c843d50b07a9" protocol=ttrpc version=3 Mar 20 17:59:31.016576 systemd[1]: Started cri-containerd-90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b.scope - libcontainer container 90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b. Mar 20 17:59:31.038565 containerd[1563]: time="2025-03-20T17:59:31.038259490Z" level=info msg="StartContainer for \"90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b\" returns successfully" Mar 20 17:59:31.054027 systemd[1]: cri-containerd-90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b.scope: Deactivated successfully. Mar 20 17:59:31.054203 systemd[1]: cri-containerd-90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b.scope: Consumed 16ms CPU time, 9.3M memory peak, 2.8M read from disk. 
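Note: once sandbox 48eb5a26… is up, each container in the pod goes through the same CRI CreateContainer-within-sandbox and StartContainer pair seen above for mount-cgroup. A trimmed sketch of those two calls follows; the sandbox config is a stand-in built from the metadata shown in the log, and the image reference is a placeholder, since the kubelet normally supplies both.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Sandbox ID returned by RunPodSandbox in the log above.
	sandboxID := "48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157"

	// Trimmed stand-in for the sandbox config the kubelet builds from the Pod object.
	sandboxConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-lqcqh",
			Namespace: "kube-system",
			Uid:       "615b5272-e43a-41d9-b6f0-05d9ac5e7f29",
		},
	}

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:<tag>"}, // placeholder image
		},
		SandboxConfig: sandboxConfig,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
}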
Mar 20 17:59:31.055073 containerd[1563]: time="2025-03-20T17:59:31.055047389Z" level=info msg="received exit event container_id:\"90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b\" id:\"90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b\" pid:4710 exited_at:{seconds:1742493571 nanos:54801313}" Mar 20 17:59:31.055749 containerd[1563]: time="2025-03-20T17:59:31.055732039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b\" id:\"90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b\" pid:4710 exited_at:{seconds:1742493571 nanos:54801313}" Mar 20 17:59:31.500585 containerd[1563]: time="2025-03-20T17:59:31.499499342Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 17:59:31.503373 containerd[1563]: time="2025-03-20T17:59:31.503351191Z" level=info msg="Container d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:59:31.506231 containerd[1563]: time="2025-03-20T17:59:31.506210517Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0\"" Mar 20 17:59:31.508421 containerd[1563]: time="2025-03-20T17:59:31.508381687Z" level=info msg="StartContainer for \"d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0\"" Mar 20 17:59:31.509197 containerd[1563]: time="2025-03-20T17:59:31.509129787Z" level=info msg="connecting to shim d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0" address="unix:///run/containerd/s/6c9ffd1bda1cca5866e8bb19a8f48bc466916a227ebdedcc9ee7c843d50b07a9" protocol=ttrpc version=3 Mar 20 17:59:31.529055 systemd[1]: Started cri-containerd-d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0.scope - libcontainer container d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0. Mar 20 17:59:31.550032 containerd[1563]: time="2025-03-20T17:59:31.549898574Z" level=info msg="StartContainer for \"d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0\" returns successfully" Mar 20 17:59:31.559402 systemd[1]: cri-containerd-d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0.scope: Deactivated successfully. Mar 20 17:59:31.559685 systemd[1]: cri-containerd-d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0.scope: Consumed 12ms CPU time, 7.2M memory peak, 1.8M read from disk. 
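Note: beneath those CRI calls, the "connecting to shim … protocol=ttrpc" lines show containerd creating a task in a per-container shim and later receiving its exit event. A sketch of the same create/start/wait-for-exit sequence with the containerd Go client, assuming the CRI plugin's "k8s.io" namespace and a container that still exists and has no running task; this is an illustration of the layer below the CRI calls, not containerd's own code.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes containers in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID taken from the log above (mount-cgroup).
	c, err := client.LoadContainer(ctx, "90f350f0660b16b265d0bd4a1b213285c77aac729102a8442bc1171f707c1c6b")
	if err != nil {
		log.Fatal(err)
	}

	// Creating the task is what spawns the shim the log shows containerd
	// connecting to over ttrpc.
	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // subscribe to the exit event before starting
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh // delivered when the "received exit event" line appears
	code, exitedAt, err := status.Result()
	fmt.Println(code, exitedAt, err)
}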
Mar 20 17:59:31.560323 containerd[1563]: time="2025-03-20T17:59:31.560108952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0\" id:\"d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0\" pid:4754 exited_at:{seconds:1742493571 nanos:559688696}" Mar 20 17:59:31.560323 containerd[1563]: time="2025-03-20T17:59:31.560193550Z" level=info msg="received exit event container_id:\"d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0\" id:\"d5c365c590ba2522ef6d3be748a829725547cf4654cadf216efa9fbd492f90d0\" pid:4754 exited_at:{seconds:1742493571 nanos:559688696}" Mar 20 17:59:32.456092 kubelet[2923]: I0320 17:59:32.456050 2923 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-20T17:59:32Z","lastTransitionTime":"2025-03-20T17:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 20 17:59:32.503822 containerd[1563]: time="2025-03-20T17:59:32.503289506Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 17:59:32.511358 containerd[1563]: time="2025-03-20T17:59:32.510667123Z" level=info msg="Container 41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:59:32.520543 containerd[1563]: time="2025-03-20T17:59:32.520520609Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc\"" Mar 20 17:59:32.522007 containerd[1563]: time="2025-03-20T17:59:32.520880908Z" level=info msg="StartContainer for \"41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc\"" Mar 20 17:59:32.522007 containerd[1563]: time="2025-03-20T17:59:32.521637272Z" level=info msg="connecting to shim 41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc" address="unix:///run/containerd/s/6c9ffd1bda1cca5866e8bb19a8f48bc466916a227ebdedcc9ee7c843d50b07a9" protocol=ttrpc version=3 Mar 20 17:59:32.537086 systemd[1]: Started cri-containerd-41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc.scope - libcontainer container 41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc. Mar 20 17:59:32.563717 containerd[1563]: time="2025-03-20T17:59:32.563687332Z" level=info msg="StartContainer for \"41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc\" returns successfully" Mar 20 17:59:32.569281 systemd[1]: cri-containerd-41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc.scope: Deactivated successfully. 
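Note: the exit events carry exited_at as a seconds/nanos pair. Decoding the one above for d5c365c5… gives a wall-clock time that matches the surrounding journal timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event above: {seconds:1742493571 nanos:559688696}
	exitedAt := time.Unix(1742493571, 559688696).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-03-20T17:59:31.559688696Z
}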
Mar 20 17:59:32.569901 containerd[1563]: time="2025-03-20T17:59:32.569874973Z" level=info msg="received exit event container_id:\"41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc\" id:\"41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc\" pid:4797 exited_at:{seconds:1742493572 nanos:569476534}" Mar 20 17:59:32.570120 containerd[1563]: time="2025-03-20T17:59:32.570101567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc\" id:\"41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc\" pid:4797 exited_at:{seconds:1742493572 nanos:569476534}" Mar 20 17:59:32.586657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41184afa17a53f247a413ff17a05cfd3b8e683f2ddd90e1f1665cabab820eabc-rootfs.mount: Deactivated successfully. Mar 20 17:59:33.560972 containerd[1563]: time="2025-03-20T17:59:33.560877992Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 20 17:59:33.681016 containerd[1563]: time="2025-03-20T17:59:33.680628569Z" level=info msg="Container f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:59:33.712295 containerd[1563]: time="2025-03-20T17:59:33.712215489Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d\"" Mar 20 17:59:33.712746 containerd[1563]: time="2025-03-20T17:59:33.712733423Z" level=info msg="StartContainer for \"f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d\"" Mar 20 17:59:33.713477 containerd[1563]: time="2025-03-20T17:59:33.713457420Z" level=info msg="connecting to shim f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d" address="unix:///run/containerd/s/6c9ffd1bda1cca5866e8bb19a8f48bc466916a227ebdedcc9ee7c843d50b07a9" protocol=ttrpc version=3 Mar 20 17:59:33.729116 systemd[1]: Started cri-containerd-f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d.scope - libcontainer container f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d. Mar 20 17:59:33.747169 systemd[1]: cri-containerd-f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d.scope: Deactivated successfully. 
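Note: the "Node became not ready … cni plugin not initialized" condition persists until a CNI network configuration appears; the Cilium agent writes one once it is running, after which the condition clears. A minimal check of the conventional config directory follows; /etc/cni/net.d is an assumption here, and the real kubelet/containerd readiness logic is more involved than a directory listing.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigured reports whether at least one CNI network definition is present.
func cniConfigured(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil // a network definition exists; NetworkReady can flip to true
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigured("/etc/cni/net.d")
	fmt.Println(ok, err)
}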
Mar 20 17:59:33.747359 containerd[1563]: time="2025-03-20T17:59:33.747220341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d\" id:\"f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d\" pid:4836 exited_at:{seconds:1742493573 nanos:747053248}" Mar 20 17:59:33.767192 containerd[1563]: time="2025-03-20T17:59:33.767090836Z" level=info msg="received exit event container_id:\"f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d\" id:\"f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d\" pid:4836 exited_at:{seconds:1742493573 nanos:747053248}" Mar 20 17:59:33.768154 containerd[1563]: time="2025-03-20T17:59:33.768099079Z" level=info msg="StartContainer for \"f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d\" returns successfully" Mar 20 17:59:33.780819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f77d94b37a7aed859e247a05abf41e4e775f590f599e53b58b67f1bc8d9dfa6d-rootfs.mount: Deactivated successfully. Mar 20 17:59:34.565055 containerd[1563]: time="2025-03-20T17:59:34.564643795Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 20 17:59:34.604031 containerd[1563]: time="2025-03-20T17:59:34.604001643Z" level=info msg="Container 76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:59:34.609584 containerd[1563]: time="2025-03-20T17:59:34.608732860Z" level=info msg="CreateContainer within sandbox \"48eb5a265da2bc6da49510e17270dbd25250fc979992053383fea67a6ff27157\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64\"" Mar 20 17:59:34.610153 containerd[1563]: time="2025-03-20T17:59:34.609997692Z" level=info msg="StartContainer for \"76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64\"" Mar 20 17:59:34.610930 containerd[1563]: time="2025-03-20T17:59:34.610901716Z" level=info msg="connecting to shim 76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64" address="unix:///run/containerd/s/6c9ffd1bda1cca5866e8bb19a8f48bc466916a227ebdedcc9ee7c843d50b07a9" protocol=ttrpc version=3 Mar 20 17:59:34.628107 systemd[1]: Started cri-containerd-76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64.scope - libcontainer container 76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64. 
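Note: taken together, the container names above trace Cilium's startup order: four short-lived init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each run to completion before the long-running cilium-agent starts. A sketch of that ordering as a Kubernetes PodSpec; the image reference is a placeholder, not taken from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	const image = "quay.io/cilium/cilium:<tag>" // placeholder; the log does not show the image

	initNames := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	inits := make([]corev1.Container, 0, len(initNames))
	for _, n := range initNames {
		inits = append(inits, corev1.Container{Name: n, Image: image})
	}

	spec := corev1.PodSpec{
		InitContainers: inits, // run sequentially; each must exit 0 before the next starts
		Containers: []corev1.Container{
			{Name: "cilium-agent", Image: image}, // the long-running container started last
		},
	}

	for _, c := range spec.InitContainers {
		fmt.Println("init:", c.Name)
	}
	fmt.Println("main:", spec.Containers[0].Name)
}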
Mar 20 17:59:34.649104 containerd[1563]: time="2025-03-20T17:59:34.649065100Z" level=info msg="StartContainer for \"76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64\" returns successfully" Mar 20 17:59:34.742330 containerd[1563]: time="2025-03-20T17:59:34.742148001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64\" id:\"80f6bfda67e46bfd683665ca83a31ac95cf530f519bddbdc7367993cd44f30f9\" pid:4903 exited_at:{seconds:1742493574 nanos:741936281}" Mar 20 17:59:35.277019 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 20 17:59:37.346413 containerd[1563]: time="2025-03-20T17:59:37.345278874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64\" id:\"6e6afca3ffcd80af7e8f593e0190459289fedc0c88bd327500452fbdfb851422\" pid:5102 exit_status:1 exited_at:{seconds:1742493577 nanos:344857467}" Mar 20 17:59:38.277308 systemd-networkd[1463]: lxc_health: Link UP Mar 20 17:59:38.298436 systemd-networkd[1463]: lxc_health: Gained carrier Mar 20 17:59:38.947223 kubelet[2923]: I0320 17:59:38.947179 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lqcqh" podStartSLOduration=8.947167422 podStartE2EDuration="8.947167422s" podCreationTimestamp="2025-03-20 17:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:59:35.579877695 +0000 UTC m=+125.574793132" watchObservedRunningTime="2025-03-20 17:59:38.947167422 +0000 UTC m=+128.942082856" Mar 20 17:59:39.516799 containerd[1563]: time="2025-03-20T17:59:39.516747421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64\" id:\"0023384b652093261e53a956ee0c365dd3508f7de4323687130cbce306d60470\" pid:5468 exited_at:{seconds:1742493579 nanos:516400393}" Mar 20 17:59:39.576076 systemd-networkd[1463]: lxc_health: Gained IPv6LL Mar 20 17:59:41.607936 containerd[1563]: time="2025-03-20T17:59:41.607407588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64\" id:\"f97798b50df67459cc288ff49800f3d850ac142ef17167251ae570e4c43cdb07\" pid:5500 exited_at:{seconds:1742493581 nanos:606945144}" Mar 20 17:59:43.674927 containerd[1563]: time="2025-03-20T17:59:43.674899559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76dd537c0be13b19afcfde304e84be9e3c8deb9713afa711fa8bcfda0bc20d64\" id:\"249463f8ea2d2b23efe674c2c8c6cff32114606692158ab34d8d8274a01a47aa\" pid:5521 exited_at:{seconds:1742493583 nanos:674573988}" Mar 20 17:59:43.697960 sshd[4643]: Connection closed by 139.178.68.195 port 51890 Mar 20 17:59:43.700777 sshd-session[4640]: pam_unix(sshd:session): session closed for user core Mar 20 17:59:43.702898 systemd[1]: sshd@26-139.178.70.103:22-139.178.68.195:51890.service: Deactivated successfully. Mar 20 17:59:43.704071 systemd[1]: session-29.scope: Deactivated successfully. Mar 20 17:59:43.704394 systemd-logind[1540]: Session 29 logged out. Waiting for processes to exit. Mar 20 17:59:43.705577 systemd-logind[1540]: Removed session 29.
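Note: the podStartSLOduration of 8.947167422s reported above is the gap between the pod's creation timestamp and the time the kubelet observed it running (no image pull happened, hence the zero-valued pulling timestamps). Reproducing the arithmetic from the two timestamps in that log entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2025-03-20 17:59:30 +0000 UTC")
	running, _ := time.Parse(layout, "2025-03-20 17:59:38.947167422 +0000 UTC")

	fmt.Println(running.Sub(created)) // 8.947167422s
}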