May 13 00:02:24.740495 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:20:27 -00 2025 May 13 00:02:24.740512 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 13 00:02:24.740518 kernel: Disabled fast string operations May 13 00:02:24.740523 kernel: BIOS-provided physical RAM map: May 13 00:02:24.740527 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 13 00:02:24.740531 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 13 00:02:24.740537 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 13 00:02:24.740542 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 13 00:02:24.740546 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 13 00:02:24.740550 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 13 00:02:24.740554 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 13 00:02:24.740559 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 13 00:02:24.740563 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 13 00:02:24.740567 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 13 00:02:24.740574 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 13 00:02:24.740579 kernel: NX (Execute Disable) protection: active May 13 00:02:24.740583 kernel: APIC: Static calls initialized May 13 00:02:24.740588 kernel: SMBIOS 2.7 present. May 13 00:02:24.740593 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 13 00:02:24.740598 kernel: vmware: hypercall mode: 0x00 May 13 00:02:24.740603 kernel: Hypervisor detected: VMware May 13 00:02:24.740608 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 13 00:02:24.740614 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 13 00:02:24.740619 kernel: vmware: using clock offset of 3654249489 ns May 13 00:02:24.740624 kernel: tsc: Detected 3408.000 MHz processor May 13 00:02:24.740629 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 00:02:24.740635 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 00:02:24.740640 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 13 00:02:24.740645 kernel: total RAM covered: 3072M May 13 00:02:24.740650 kernel: Found optimal setting for mtrr clean up May 13 00:02:24.740655 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 13 00:02:24.740660 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 13 00:02:24.740667 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 00:02:24.740672 kernel: Using GB pages for direct mapping May 13 00:02:24.740677 kernel: ACPI: Early table checksum verification disabled May 13 00:02:24.740682 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 13 00:02:24.740687 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 13 00:02:24.740692 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 13 00:02:24.740697 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 13 00:02:24.740702 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 00:02:24.740709 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 00:02:24.740715 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 13 00:02:24.740720 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 13 00:02:24.740725 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 13 00:02:24.740730 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 13 00:02:24.740735 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 13 00:02:24.740742 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 13 00:02:24.740747 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 13 00:02:24.740752 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 13 00:02:24.740757 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 00:02:24.740762 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 00:02:24.740767 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 13 00:02:24.740773 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 13 00:02:24.740778 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 13 00:02:24.740783 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 13 00:02:24.740789 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 13 00:02:24.740794 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 13 00:02:24.740799 kernel: system APIC only can use physical flat May 13 00:02:24.740804 kernel: APIC: Switched APIC routing to: physical flat May 13 00:02:24.740810 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 13 00:02:24.740815 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 13 00:02:24.740820 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 13 00:02:24.740825 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 13 00:02:24.740830 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 13 00:02:24.740835 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 13 00:02:24.740841 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 13 00:02:24.740846 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 13 00:02:24.740851 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 13 00:02:24.740856 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 13 00:02:24.740861 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 13 00:02:24.740866 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 13 00:02:24.740871 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 13 00:02:24.740876 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 13 00:02:24.740881 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 13 00:02:24.740887 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 13 00:02:24.740893 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 13 00:02:24.740898 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 13 00:02:24.740903 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 13 00:02:24.740907 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 13 00:02:24.740913 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 13 00:02:24.740918 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 13 00:02:24.740923 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 13 00:02:24.740928 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 13 00:02:24.740933 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 13 00:02:24.740939 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 13 00:02:24.740944 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 13 00:02:24.740949 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 13 00:02:24.740954 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 13 00:02:24.740960 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 May 13 00:02:24.740965 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 May 13 00:02:24.740970 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 13 00:02:24.740975 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 13 00:02:24.740980 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 13 00:02:24.740985 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 13 00:02:24.740991 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 May 13 00:02:24.740996 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 13 00:02:24.741001 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 13 00:02:24.741006 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 13 00:02:24.741011 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 13 00:02:24.741016 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 13 00:02:24.741021 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 13 00:02:24.741026 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 13 00:02:24.741031 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 13 00:02:24.741036 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 13 00:02:24.741043 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 13 00:02:24.741048 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 13 00:02:24.741053 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 13 00:02:24.741058 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 13 00:02:24.741063 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 13 00:02:24.741068 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 13 00:02:24.741073 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 13 00:02:24.741078 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 13 00:02:24.741083 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 May 13 00:02:24.741088 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 13 00:02:24.741094 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 13 00:02:24.741100 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 13 00:02:24.741104 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 13 00:02:24.741110 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 13 00:02:24.741119 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 13 00:02:24.741124 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 13 00:02:24.741130 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 13 00:02:24.741135 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 13 00:02:24.741141 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 13 00:02:24.741147 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 13 00:02:24.741152 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 13 00:02:24.741158 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 May 13 00:02:24.741163 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 13 00:02:24.741168 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 13 00:02:24.741174 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 13 00:02:24.741179 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 13 00:02:24.741185 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 13 00:02:24.741190 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 13 00:02:24.741196 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 13 00:02:24.741202 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 13 00:02:24.741207 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 13 00:02:24.741212 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 13 00:02:24.741218 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 13 00:02:24.741223 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 13 00:02:24.741229 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 13 00:02:24.741234 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 13 00:02:24.741239 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 13 00:02:24.741245 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 13 00:02:24.741251 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 May 13 00:02:24.741257 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 13 00:02:24.741262 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 13 00:02:24.741267 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 13 00:02:24.741273 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 13 00:02:24.741278 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 13 00:02:24.741283 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 13 00:02:24.741288 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 13 00:02:24.741294 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 13 00:02:24.741299 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 13 00:02:24.741306 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 13 00:02:24.741311 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 13 00:02:24.741317 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 13 00:02:24.741322 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 13 00:02:24.741509 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 13 00:02:24.741515 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 13 00:02:24.741520 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 13 00:02:24.741525 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 13 00:02:24.741531 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 13 00:02:24.741536 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 13 00:02:24.741544 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 13 00:02:24.741549 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 13 00:02:24.741555 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 13 00:02:24.741560 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 13 00:02:24.741566 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 May 13 00:02:24.741571 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 13 00:02:24.741576 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 13 00:02:24.741581 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 13 00:02:24.741587 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 13 00:02:24.741643 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 13 00:02:24.741652 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 13 00:02:24.741657 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 13 00:02:24.741663 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 13 00:02:24.741668 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 13 00:02:24.741673 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 13 00:02:24.741679 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 13 00:02:24.741684 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 13 00:02:24.741690 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 13 00:02:24.741695 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 13 00:02:24.741700 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 13 00:02:24.741707 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 13 00:02:24.741713 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 13 00:02:24.741718 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 13 00:02:24.741723 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 13 00:02:24.741729 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 13 00:02:24.741734 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 13 00:02:24.741740 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 13 00:02:24.741746 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 13 00:02:24.741751 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 13 00:02:24.741758 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 13 00:02:24.741764 kernel: Zone ranges: May 13 00:02:24.741770 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 00:02:24.741775 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 13 00:02:24.741781 kernel: Normal empty May 13 00:02:24.741786 kernel: Movable zone start for each node May 13 00:02:24.741792 kernel: Early memory node ranges May 13 00:02:24.741797 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 13 00:02:24.741803 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 13 00:02:24.741808 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 13 00:02:24.741815 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 13 00:02:24.741821 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:02:24.741826 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 13 00:02:24.741832 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 13 00:02:24.741837 kernel: ACPI: PM-Timer IO Port: 0x1008 May 13 00:02:24.741842 kernel: system APIC only can use physical flat May 13 00:02:24.741848 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 13 00:02:24.741854 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 13 00:02:24.741859 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 13 00:02:24.741866 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 13 00:02:24.741871 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 13 00:02:24.741876 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 13 00:02:24.741882 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 13 00:02:24.741887 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 13 00:02:24.741893 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 13 00:02:24.741898 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 13 00:02:24.741904 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 13 00:02:24.741909 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 13 00:02:24.741914 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 13 00:02:24.741921 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 13 00:02:24.741927 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 13 00:02:24.741932 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 13 00:02:24.741938 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 13 00:02:24.741943 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 13 00:02:24.741948 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 13 00:02:24.741954 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 13 00:02:24.741959 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 13 00:02:24.741964 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 13 00:02:24.741971 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 13 00:02:24.741977 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 13 00:02:24.741982 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 13 00:02:24.741987 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 13 00:02:24.741993 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 13 00:02:24.741998 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 13 00:02:24.742004 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 13 00:02:24.742024 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 13 00:02:24.742031 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 13 00:02:24.742036 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 13 00:02:24.742043 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 13 00:02:24.742049 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 13 00:02:24.742054 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 13 00:02:24.742060 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 13 00:02:24.742065 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 13 00:02:24.742070 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 13 00:02:24.742076 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 13 00:02:24.742081 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 13 00:02:24.742087 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 13 00:02:24.742093 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 13 00:02:24.742099 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 13 00:02:24.742104 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 13 00:02:24.742110 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 13 00:02:24.742115 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 13 00:02:24.742121 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 13 00:02:24.742126 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 13 00:02:24.742132 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 13 00:02:24.742137 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 13 00:02:24.742142 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 13 00:02:24.742149 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 13 00:02:24.742155 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 13 00:02:24.742160 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 13 00:02:24.742165 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 13 00:02:24.742171 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 13 00:02:24.742176 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 13 00:02:24.742181 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 13 00:02:24.742187 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 13 00:02:24.742192 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 13 00:02:24.742198 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 13 00:02:24.742204 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 13 00:02:24.742210 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 13 00:02:24.742215 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 13 00:02:24.742220 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 13 00:02:24.742226 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 13 00:02:24.742232 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 13 00:02:24.742237 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 13 00:02:24.742242 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 13 00:02:24.742248 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 13 00:02:24.742255 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 13 00:02:24.742260 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 13 00:02:24.742266 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 13 00:02:24.742271 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 13 00:02:24.742277 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 13 00:02:24.742282 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 13 00:02:24.742287 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 13 00:02:24.742293 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 13 00:02:24.742298 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 13 00:02:24.742303 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 13 00:02:24.742310 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 13 00:02:24.742316 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 13 00:02:24.742321 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 13 00:02:24.742333 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 13 00:02:24.742338 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 13 00:02:24.742343 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 13 00:02:24.742349 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 13 00:02:24.742354 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 13 00:02:24.742359 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 13 00:02:24.742365 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 13 00:02:24.742372 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 13 00:02:24.742377 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 13 00:02:24.742382 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 13 00:02:24.742388 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 13 00:02:24.742427 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 13 00:02:24.742433 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 13 00:02:24.742438 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 13 00:02:24.742444 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 13 00:02:24.742449 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 13 00:02:24.742456 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 13 00:02:24.742462 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 13 00:02:24.742467 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 13 00:02:24.742473 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 13 00:02:24.742478 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 13 00:02:24.742483 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 13 00:02:24.742489 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 13 00:02:24.742494 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 13 00:02:24.742500 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 13 00:02:24.742505 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 13 00:02:24.742512 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 13 00:02:24.742517 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 13 00:02:24.742523 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 13 00:02:24.742528 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 13 00:02:24.742534 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 13 00:02:24.742539 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 13 00:02:24.742545 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 13 00:02:24.742550 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 13 
00:02:24.742556 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 13 00:02:24.742561 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 13 00:02:24.742568 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 13 00:02:24.742573 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 13 00:02:24.742579 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 13 00:02:24.742584 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 13 00:02:24.742590 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 13 00:02:24.742595 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 13 00:02:24.742601 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 13 00:02:24.742606 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 13 00:02:24.742611 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 13 00:02:24.742618 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 13 00:02:24.742624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 13 00:02:24.742630 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 00:02:24.742635 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 13 00:02:24.742641 kernel: TSC deadline timer available May 13 00:02:24.742646 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 13 00:02:24.742652 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 13 00:02:24.742660 kernel: Booting paravirtualized kernel on VMware hypervisor May 13 00:02:24.742668 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 00:02:24.742677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 13 00:02:24.742689 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 May 13 00:02:24.742697 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 May 13 00:02:24.742707 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 13 00:02:24.742713 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 13 00:02:24.742719 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 13 00:02:24.742724 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 13 00:02:24.742729 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 13 00:02:24.742742 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 13 00:02:24.742752 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 13 00:02:24.742761 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 13 00:02:24.742767 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 13 00:02:24.742772 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 13 00:02:24.742778 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 13 00:02:24.742784 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 13 00:02:24.742789 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 13 00:02:24.742795 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 13 00:02:24.742801 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 13 00:02:24.742808 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 13 00:02:24.742815 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 13 00:02:24.742821 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:02:24.742830 kernel: random: crng init done May 13 00:02:24.742839 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 13 00:02:24.742850 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 13 00:02:24.742858 kernel: printk: log_buf_len min size: 262144 bytes May 13 00:02:24.742864 kernel: printk: log_buf_len: 1048576 bytes May 13 00:02:24.742872 kernel: printk: early log buf free: 239648(91%) May 13 00:02:24.742877 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:02:24.742883 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 00:02:24.742892 kernel: Fallback order for Node 0: 0 May 13 00:02:24.742901 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 May 13 00:02:24.742906 kernel: Policy zone: DMA32 May 13 00:02:24.742915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:02:24.742926 kernel: Memory: 1932260K/2096628K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 164108K reserved, 0K cma-reserved) May 13 00:02:24.742934 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 13 00:02:24.742945 kernel: ftrace: allocating 37993 entries in 149 pages May 13 00:02:24.742956 kernel: ftrace: allocated 149 pages with 4 groups May 13 00:02:24.742967 kernel: Dynamic Preempt: voluntary May 13 00:02:24.742973 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 00:02:24.742981 kernel: rcu: RCU event tracing is enabled. May 13 00:02:24.742989 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 13 00:02:24.742997 kernel: Trampoline variant of Tasks RCU enabled. May 13 00:02:24.743003 kernel: Rude variant of Tasks RCU enabled. May 13 00:02:24.743009 kernel: Tracing variant of Tasks RCU enabled. May 13 00:02:24.743015 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 00:02:24.743021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 13 00:02:24.743030 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 13 00:02:24.743040 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 13 00:02:24.743050 kernel: Console: colour VGA+ 80x25 May 13 00:02:24.743060 kernel: printk: console [tty0] enabled May 13 00:02:24.743072 kernel: printk: console [ttyS0] enabled May 13 00:02:24.743079 kernel: ACPI: Core revision 20230628 May 13 00:02:24.743085 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 13 00:02:24.743091 kernel: APIC: Switch to symmetric I/O mode setup May 13 00:02:24.743097 kernel: x2apic enabled May 13 00:02:24.743104 kernel: APIC: Switched APIC routing to: physical x2apic May 13 00:02:24.743115 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 00:02:24.743125 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 00:02:24.743135 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 13 00:02:24.743144 kernel: Disabled fast string operations May 13 00:02:24.743150 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 13 00:02:24.743159 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 13 00:02:24.743170 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 00:02:24.743180 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 13 00:02:24.743191 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 13 00:02:24.743202 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 13 00:02:24.743211 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 13 00:02:24.743217 kernel: RETBleed: Mitigation: Enhanced IBRS May 13 00:02:24.743225 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 00:02:24.743231 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 13 00:02:24.743240 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 00:02:24.743251 kernel: SRBDS: Unknown: Dependent on hypervisor status May 13 00:02:24.743261 kernel: GDS: Unknown: Dependent on hypervisor status May 13 00:02:24.743270 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 00:02:24.743276 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 00:02:24.743284 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 00:02:24.743291 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 00:02:24.743299 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 13 00:02:24.743305 kernel: Freeing SMP alternatives memory: 32K May 13 00:02:24.743311 kernel: pid_max: default: 131072 minimum: 1024 May 13 00:02:24.743317 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 00:02:24.743333 kernel: landlock: Up and running. May 13 00:02:24.743345 kernel: SELinux: Initializing. May 13 00:02:24.743353 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 00:02:24.743359 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 00:02:24.743365 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 13 00:02:24.743373 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:02:24.743379 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:02:24.743386 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:02:24.743396 kernel: Performance Events: Skylake events, core PMU driver. May 13 00:02:24.743404 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 13 00:02:24.743410 kernel: core: CPUID marked event: 'instructions' unavailable May 13 00:02:24.743416 kernel: core: CPUID marked event: 'bus cycles' unavailable May 13 00:02:24.743422 kernel: core: CPUID marked event: 'cache references' unavailable May 13 00:02:24.743430 kernel: core: CPUID marked event: 'cache misses' unavailable May 13 00:02:24.743443 kernel: core: CPUID marked event: 'branch instructions' unavailable May 13 00:02:24.743453 kernel: core: CPUID marked event: 'branch misses' unavailable May 13 00:02:24.743462 kernel: ... 
version: 1 May 13 00:02:24.743469 kernel: ... bit width: 48 May 13 00:02:24.743475 kernel: ... generic registers: 4 May 13 00:02:24.743484 kernel: ... value mask: 0000ffffffffffff May 13 00:02:24.743490 kernel: ... max period: 000000007fffffff May 13 00:02:24.743496 kernel: ... fixed-purpose events: 0 May 13 00:02:24.743502 kernel: ... event mask: 000000000000000f May 13 00:02:24.743509 kernel: signal: max sigframe size: 1776 May 13 00:02:24.743515 kernel: rcu: Hierarchical SRCU implementation. May 13 00:02:24.743522 kernel: rcu: Max phase no-delay instances is 400. May 13 00:02:24.743532 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 00:02:24.743542 kernel: smp: Bringing up secondary CPUs ... May 13 00:02:24.743551 kernel: smpboot: x86: Booting SMP configuration: May 13 00:02:24.743557 kernel: .... node #0, CPUs: #1 May 13 00:02:24.743563 kernel: Disabled fast string operations May 13 00:02:24.743572 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 13 00:02:24.743584 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 13 00:02:24.743596 kernel: smp: Brought up 1 node, 2 CPUs May 13 00:02:24.743602 kernel: smpboot: Max logical packages: 128 May 13 00:02:24.743608 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 13 00:02:24.743614 kernel: devtmpfs: initialized May 13 00:02:24.743621 kernel: x86/mm: Memory block size: 128MB May 13 00:02:24.743627 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 13 00:02:24.743635 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:02:24.743645 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 13 00:02:24.743656 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:02:24.743662 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:02:24.743668 kernel: audit: initializing netlink subsys (disabled) May 13 00:02:24.743674 kernel: audit: type=2000 audit(1747094543.067:1): state=initialized audit_enabled=0 res=1 May 13 00:02:24.743679 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:02:24.743685 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:02:24.743691 kernel: cpuidle: using governor menu May 13 00:02:24.743700 kernel: Simple Boot Flag at 0x36 set to 0x80 May 13 00:02:24.743710 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:02:24.743718 kernel: dca service started, version 1.12.1 May 13 00:02:24.743724 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 13 00:02:24.743730 kernel: PCI: Using configuration type 1 for base access May 13 00:02:24.743740 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 00:02:24.743749 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:02:24.743758 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 00:02:24.743769 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:02:24.743780 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 00:02:24.743792 kernel: ACPI: Added _OSI(Module Device) May 13 00:02:24.743803 kernel: ACPI: Added _OSI(Processor Device) May 13 00:02:24.743814 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:02:24.743824 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:02:24.743834 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:02:24.743845 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 13 00:02:24.743856 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 00:02:24.743866 kernel: ACPI: Interpreter enabled May 13 00:02:24.743875 kernel: ACPI: PM: (supports S0 S1 S5) May 13 00:02:24.743885 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:02:24.743899 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:02:24.743909 kernel: PCI: Using E820 reservations for host bridge windows May 13 00:02:24.743917 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 13 00:02:24.743923 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 13 00:02:24.744026 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:02:24.744099 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 13 00:02:24.744173 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 13 00:02:24.744186 kernel: PCI host bridge to bus 0000:00 May 13 00:02:24.744264 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 00:02:24.744350 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 13 00:02:24.744414 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:02:24.744472 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:02:24.744536 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 13 00:02:24.744611 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 13 00:02:24.744697 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 13 00:02:24.744767 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 13 00:02:24.744846 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 13 00:02:24.744936 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 13 00:02:24.745003 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 13 00:02:24.745071 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 00:02:24.745143 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 00:02:24.745208 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 00:02:24.745272 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 00:02:24.745340 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 13 00:02:24.745406 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 13 00:02:24.745459 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 13 00:02:24.745520 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 13 00:02:24.745585 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 13 00:02:24.745638 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 13 00:02:24.745694 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 13 00:02:24.745746 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 13 00:02:24.745797 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 13 00:02:24.745848 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 13 00:02:24.745902 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 13 00:02:24.745966 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:02:24.746022 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 13 00:02:24.746082 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.746134 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 13 00:02:24.746192 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.746248 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 13 00:02:24.748377 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.748441 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 13 00:02:24.748503 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.748558 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 13 00:02:24.748619 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.748680 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 13 00:02:24.748754 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.748807 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 13 00:02:24.748864 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.748919 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 13 00:02:24.748977 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.749031 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 13 00:02:24.749087 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.749143 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 13 00:02:24.749205 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.749258 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 13 00:02:24.749317 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.751437 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 13 00:02:24.751520 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.751584 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 13 00:02:24.751650 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.751712 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 13 00:02:24.751787 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.751851 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 13 00:02:24.751912 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.751964 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 13 00:02:24.752025 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.752086 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 13 00:02:24.752158 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 May 13 00:02:24.752212 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 13 00:02:24.752273 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.752335 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 13 00:02:24.752392 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.752461 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 13 00:02:24.752526 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.752579 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 13 00:02:24.752637 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.752689 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 13 00:02:24.752758 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.752812 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 13 00:02:24.752868 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.752920 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 13 00:02:24.752980 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.753038 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 13 00:02:24.753103 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.753155 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 13 00:02:24.753211 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.753264 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 13 00:02:24.753842 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.753914 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 13 00:02:24.753986 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.754054 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 13 00:02:24.754112 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.754166 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 13 00:02:24.754296 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.754419 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 13 00:02:24.754496 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.754563 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 13 00:02:24.754639 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 13 00:02:24.754696 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 13 00:02:24.754751 kernel: pci_bus 0000:01: extended config space not accessible May 13 00:02:24.754812 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 00:02:24.754877 kernel: pci_bus 0000:02: extended config space not accessible May 13 00:02:24.754887 kernel: acpiphp: Slot [32] registered May 13 00:02:24.754894 kernel: acpiphp: Slot [33] registered May 13 00:02:24.754900 kernel: acpiphp: Slot [34] registered May 13 00:02:24.754908 kernel: acpiphp: Slot [35] registered May 13 00:02:24.754915 kernel: acpiphp: Slot [36] registered May 13 00:02:24.754922 kernel: acpiphp: Slot [37] registered May 13 00:02:24.754929 kernel: acpiphp: Slot [38] registered May 13 00:02:24.754938 kernel: acpiphp: Slot [39] registered May 13 00:02:24.754944 kernel: acpiphp: Slot [40] registered May 13 00:02:24.754950 kernel: acpiphp: Slot [41] registered May 13 00:02:24.754956 kernel: acpiphp: Slot [42] registered May 13 
00:02:24.754962 kernel: acpiphp: Slot [43] registered May 13 00:02:24.754967 kernel: acpiphp: Slot [44] registered May 13 00:02:24.754973 kernel: acpiphp: Slot [45] registered May 13 00:02:24.754979 kernel: acpiphp: Slot [46] registered May 13 00:02:24.754985 kernel: acpiphp: Slot [47] registered May 13 00:02:24.754992 kernel: acpiphp: Slot [48] registered May 13 00:02:24.755002 kernel: acpiphp: Slot [49] registered May 13 00:02:24.755013 kernel: acpiphp: Slot [50] registered May 13 00:02:24.755019 kernel: acpiphp: Slot [51] registered May 13 00:02:24.755025 kernel: acpiphp: Slot [52] registered May 13 00:02:24.755031 kernel: acpiphp: Slot [53] registered May 13 00:02:24.755037 kernel: acpiphp: Slot [54] registered May 13 00:02:24.755043 kernel: acpiphp: Slot [55] registered May 13 00:02:24.755049 kernel: acpiphp: Slot [56] registered May 13 00:02:24.755059 kernel: acpiphp: Slot [57] registered May 13 00:02:24.755068 kernel: acpiphp: Slot [58] registered May 13 00:02:24.755074 kernel: acpiphp: Slot [59] registered May 13 00:02:24.755080 kernel: acpiphp: Slot [60] registered May 13 00:02:24.755086 kernel: acpiphp: Slot [61] registered May 13 00:02:24.755092 kernel: acpiphp: Slot [62] registered May 13 00:02:24.755098 kernel: acpiphp: Slot [63] registered May 13 00:02:24.755168 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 13 00:02:24.755254 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 00:02:24.755308 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 00:02:24.755415 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:02:24.755476 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 13 00:02:24.755545 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 13 00:02:24.755605 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 13 00:02:24.755661 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 13 00:02:24.755714 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 13 00:02:24.755787 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 13 00:02:24.755848 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 13 00:02:24.755921 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 13 00:02:24.755975 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 13 00:02:24.756028 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 13 00:02:24.756080 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 00:02:24.756135 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 00:02:24.756200 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 00:02:24.756259 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 00:02:24.756329 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 00:02:24.756398 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 00:02:24.756462 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 00:02:24.756517 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:02:24.756588 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 00:02:24.756642 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 00:02:24.756694 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 00:02:24.756749 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:02:24.756806 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 00:02:24.756862 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 00:02:24.756917 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:02:24.756989 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 00:02:24.757052 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 00:02:24.757103 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:02:24.757172 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 00:02:24.757239 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 00:02:24.757304 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:02:24.757373 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 00:02:24.757430 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 00:02:24.757498 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:02:24.757568 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 00:02:24.757623 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 00:02:24.757679 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:02:24.757772 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 13 00:02:24.757861 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 13 00:02:24.757928 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 13 00:02:24.758001 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 13 00:02:24.758068 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 13 00:02:24.758124 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 13 00:02:24.758195 kernel: pci 0000:0b:00.0: supports D1 D2 May 13 00:02:24.758259 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 00:02:24.758314 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 00:02:24.758393 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 00:02:24.758455 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 00:02:24.758518 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 00:02:24.758596 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 00:02:24.758652 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 00:02:24.758703 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 00:02:24.758761 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:02:24.758832 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 00:02:24.758892 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 00:02:24.758960 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 00:02:24.759022 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:02:24.759088 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 00:02:24.759148 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 00:02:24.759219 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:02:24.759274 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 00:02:24.762065 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 00:02:24.762151 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:02:24.762219 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 00:02:24.762291 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 00:02:24.762372 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:02:24.762433 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 00:02:24.762497 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 00:02:24.762566 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:02:24.762633 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 00:02:24.762704 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 00:02:24.762767 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:02:24.762833 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 00:02:24.762900 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 00:02:24.762961 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 00:02:24.763018 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:02:24.763091 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 00:02:24.763157 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 00:02:24.763221 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 00:02:24.763290 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:02:24.763426 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 00:02:24.763493 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 00:02:24.763560 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 00:02:24.763614 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:02:24.763671 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 00:02:24.763730 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 00:02:24.763795 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:02:24.763862 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 00:02:24.763938 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 00:02:24.764002 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:02:24.764071 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 00:02:24.764133 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 00:02:24.764201 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:02:24.764269 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 00:02:24.764352 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 00:02:24.764433 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:02:24.764497 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 00:02:24.764570 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 00:02:24.764631 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:02:24.764697 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 00:02:24.764762 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 00:02:24.764828 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 00:02:24.764894 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:02:24.764962 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 00:02:24.765038 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 00:02:24.765109 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 00:02:24.765175 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:02:24.765239 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 00:02:24.765297 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 00:02:24.765403 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:02:24.765481 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 00:02:24.765552 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 00:02:24.765620 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:02:24.765683 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 00:02:24.765745 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 00:02:24.765807 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:02:24.765862 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 00:02:24.765914 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 00:02:24.765969 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:02:24.766031 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 00:02:24.766095 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 00:02:24.766150 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:02:24.766204 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 00:02:24.766256 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 00:02:24.766312 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:02:24.766329 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 13 00:02:24.766338 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 May 13 00:02:24.766347 kernel: ACPI: PCI: Interrupt link LNKB disabled May 13 00:02:24.766354 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:02:24.766360 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 13 00:02:24.766366 kernel: iommu: Default domain type: Translated May 13 00:02:24.766372 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:02:24.766378 kernel: PCI: Using ACPI for IRQ routing May 13 00:02:24.766384 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:02:24.766391 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 13 00:02:24.766397 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 13 00:02:24.766466 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 13 00:02:24.766526 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 13 00:02:24.766579 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:02:24.766588 kernel: vgaarb: loaded May 13 00:02:24.766595 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 13 00:02:24.766601 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 13 00:02:24.766607 kernel: clocksource: Switched to clocksource tsc-early May 13 00:02:24.766613 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:02:24.766619 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:02:24.766630 kernel: pnp: PnP ACPI init May 13 00:02:24.766694 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 13 00:02:24.766757 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 13 00:02:24.766808 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 13 00:02:24.766860 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 13 00:02:24.766922 kernel: pnp 00:06: [dma 2] May 13 00:02:24.766982 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 13 00:02:24.767050 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 13 00:02:24.767106 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 13 00:02:24.767115 kernel: pnp: PnP ACPI: found 8 devices May 13 00:02:24.767122 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:02:24.767128 kernel: NET: Registered PF_INET protocol family May 13 00:02:24.767134 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:02:24.767140 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 13 00:02:24.767149 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:02:24.767164 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 00:02:24.767171 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 00:02:24.767177 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 00:02:24.767183 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 00:02:24.767189 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 00:02:24.767196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:02:24.767202 kernel: NET: Registered PF_XDP protocol family May 13 00:02:24.767260 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 00:02:24.767349 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 13 00:02:24.767417 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 13 00:02:24.767473 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 13 00:02:24.767530 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 13 00:02:24.767585 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 13 00:02:24.767660 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 13 00:02:24.767730 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 13 00:02:24.767785 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 13 00:02:24.767842 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 13 00:02:24.767905 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 13 00:02:24.767960 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 13 00:02:24.768022 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 13 00:02:24.768081 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 13 00:02:24.768146 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 13 00:02:24.768204 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 13 00:02:24.768273 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 13 00:02:24.768446 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 13 00:02:24.768520 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 13 00:02:24.768579 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 13 00:02:24.768635 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 13 00:02:24.768691 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 13 00:02:24.768749 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 13 00:02:24.768804 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:02:24.768883 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:02:24.768956 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 13 00:02:24.769010 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.769067 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 13 00:02:24.769132 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.769195 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 13 00:02:24.769252 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.769308 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 13 00:02:24.769412 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.769480 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 13 00:02:24.769545 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.769601 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] May 13 00:02:24.769652 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.769707 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 13 00:02:24.769772 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.769827 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 13 00:02:24.769886 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.769944 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 13 00:02:24.769997 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.770068 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 13 00:02:24.770138 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.770192 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 13 00:02:24.770247 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.770309 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 13 00:02:24.770388 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.770443 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 13 00:02:24.770506 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.770562 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 13 00:02:24.770630 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.770700 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 13 00:02:24.770760 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.770813 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 13 00:02:24.770873 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.770937 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 13 00:02:24.770994 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.771051 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 13 00:02:24.771105 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.771158 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 13 00:02:24.771213 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.771281 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 13 00:02:24.771358 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.771413 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 13 00:02:24.771469 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.771523 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 13 00:02:24.771587 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.771661 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 13 00:02:24.771725 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.771779 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 13 00:02:24.771830 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.771886 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 13 00:02:24.771955 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.772643 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] May 13 00:02:24.772715 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.772777 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 13 00:02:24.772835 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.772905 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 13 00:02:24.772973 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.773027 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 13 00:02:24.773084 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.773160 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 13 00:02:24.773231 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.773285 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 13 00:02:24.773355 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.773420 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 13 00:02:24.773491 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.773548 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 13 00:02:24.773601 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.773652 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 13 00:02:24.773708 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.773776 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 13 00:02:24.773841 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.773906 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 13 00:02:24.773961 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.774016 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 13 00:02:24.774079 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.774145 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 13 00:02:24.774203 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.774288 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 13 00:02:24.776389 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.776463 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 13 00:02:24.776524 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.776606 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 13 00:02:24.776669 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.776724 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 13 00:02:24.776787 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 13 00:02:24.776856 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 00:02:24.776923 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 13 00:02:24.776993 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 00:02:24.777060 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 00:02:24.777118 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:02:24.777193 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 13 00:02:24.777256 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 
13 00:02:24.777310 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 00:02:24.777390 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 00:02:24.777452 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:02:24.777523 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 00:02:24.777589 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 00:02:24.777644 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 00:02:24.777700 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:02:24.777769 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 00:02:24.777838 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 00:02:24.777892 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 00:02:24.777948 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:02:24.778016 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 00:02:24.778085 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 00:02:24.778150 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:02:24.778214 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 00:02:24.778273 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 00:02:24.778565 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:02:24.778673 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 00:02:24.778985 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 00:02:24.779075 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:02:24.779144 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 00:02:24.779201 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 00:02:24.779272 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:02:24.779745 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 00:02:24.779816 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 00:02:24.779876 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:02:24.779937 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 13 00:02:24.780005 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 00:02:24.780062 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 00:02:24.780118 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 00:02:24.780180 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:02:24.780252 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 00:02:24.780314 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 00:02:24.780389 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 00:02:24.780444 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:02:24.780498 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 00:02:24.780557 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 00:02:24.780626 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 00:02:24.780679 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:02:24.780745 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 00:02:24.780808 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] May 13 00:02:24.780870 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:02:24.780929 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 00:02:24.780995 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 00:02:24.781054 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:02:24.781123 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 00:02:24.781179 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 00:02:24.781236 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:02:24.781294 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 00:02:24.781397 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 00:02:24.781460 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:02:24.781526 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 00:02:24.781582 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 00:02:24.781637 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:02:24.781702 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 00:02:24.781755 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 00:02:24.781810 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 00:02:24.781879 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:02:24.781945 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 00:02:24.781999 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 00:02:24.782068 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 00:02:24.782130 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:02:24.782190 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 00:02:24.782245 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 00:02:24.782298 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 00:02:24.782362 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:02:24.782420 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 00:02:24.782486 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 00:02:24.782543 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:02:24.782607 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 00:02:24.782665 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 00:02:24.782721 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:02:24.782788 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 00:02:24.782842 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 00:02:24.782907 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:02:24.782965 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 00:02:24.783023 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 00:02:24.783082 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:02:24.783144 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 00:02:24.783203 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 00:02:24.783266 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] May 13 00:02:24.783906 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 00:02:24.783978 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 00:02:24.784033 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 00:02:24.784096 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:02:24.784159 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 00:02:24.784212 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 00:02:24.784271 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 00:02:24.784329 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:02:24.786380 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 00:02:24.786448 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 00:02:24.786504 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:02:24.786561 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 00:02:24.786614 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 00:02:24.786678 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:02:24.786753 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 00:02:24.786813 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 00:02:24.786878 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:02:24.786949 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 00:02:24.787023 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 00:02:24.787077 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:02:24.787130 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 00:02:24.787183 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 00:02:24.787247 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:02:24.787306 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 00:02:24.787389 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 00:02:24.787458 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:02:24.787525 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 13 00:02:24.787588 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 00:02:24.787646 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 00:02:24.787693 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 13 00:02:24.787743 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 13 00:02:24.787817 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 13 00:02:24.787869 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 13 00:02:24.787917 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:02:24.787968 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 13 00:02:24.788023 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 00:02:24.788078 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 00:02:24.788136 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 13 00:02:24.788206 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 13 00:02:24.788272 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] May 13 00:02:24.788322 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 13 00:02:24.789095 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:02:24.789155 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 13 00:02:24.789212 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 13 00:02:24.789261 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:02:24.789342 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 13 00:02:24.789402 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 13 00:02:24.789460 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:02:24.789521 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 13 00:02:24.789569 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:02:24.789621 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 13 00:02:24.789684 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:02:24.789751 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 13 00:02:24.789804 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:02:24.789863 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 13 00:02:24.789914 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:02:24.789969 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 13 00:02:24.790033 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:02:24.790109 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 13 00:02:24.790159 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 13 00:02:24.790206 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:02:24.790262 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 13 00:02:24.790316 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 13 00:02:24.790491 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:02:24.790555 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 13 00:02:24.790622 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 13 00:02:24.790679 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:02:24.790732 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 13 00:02:24.790779 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:02:24.790835 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 13 00:02:24.790888 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:02:24.790943 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 13 00:02:24.790991 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:02:24.791054 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 13 00:02:24.791106 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:02:24.791170 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 13 00:02:24.791221 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:02:24.791279 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 13 00:02:24.791337 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 13 00:02:24.791388 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:02:24.791449 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 13 00:02:24.791501 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 13 00:02:24.791561 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:02:24.791617 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 13 00:02:24.791683 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 13 00:02:24.791733 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:02:24.791790 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 13 00:02:24.791847 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:02:24.791903 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 13 00:02:24.791955 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:02:24.792024 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 13 00:02:24.792078 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:02:24.792136 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 13 00:02:24.792195 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:02:24.792261 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 13 00:02:24.792311 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:02:24.792395 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 13 00:02:24.792452 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 13 00:02:24.792507 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:02:24.792566 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 13 00:02:24.792625 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 13 00:02:24.792677 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:02:24.792747 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 13 00:02:24.792797 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:02:24.792853 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 13 00:02:24.792904 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:02:24.792956 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 13 00:02:24.793008 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:02:24.793063 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 13 00:02:24.793111 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:02:24.793162 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 13 00:02:24.793210 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:02:24.793261 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 13 00:02:24.793309 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:02:24.793409 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 00:02:24.793420 kernel: PCI: CLS 32 bytes, default 64 May 13 00:02:24.793427 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 00:02:24.793434 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 
00:02:24.793440 kernel: clocksource: Switched to clocksource tsc May 13 00:02:24.793447 kernel: Initialise system trusted keyrings May 13 00:02:24.793454 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 00:02:24.793460 kernel: Key type asymmetric registered May 13 00:02:24.793466 kernel: Asymmetric key parser 'x509' registered May 13 00:02:24.793475 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 00:02:24.793482 kernel: io scheduler mq-deadline registered May 13 00:02:24.793488 kernel: io scheduler kyber registered May 13 00:02:24.793495 kernel: io scheduler bfq registered May 13 00:02:24.793551 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 13 00:02:24.793606 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.793662 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 13 00:02:24.793714 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.793770 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 13 00:02:24.793824 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.793877 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 13 00:02:24.793931 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.793985 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 13 00:02:24.794039 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.794096 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 13 00:02:24.794149 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.794202 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 13 00:02:24.794255 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.794308 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 13 00:02:24.794468 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.794523 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 13 00:02:24.794577 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.794630 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 13 00:02:24.794682 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.794737 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 13 00:02:24.794789 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.794846 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 13 00:02:24.794899 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.794953 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 13 00:02:24.795005 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795057 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 13 00:02:24.795110 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795166 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 13 00:02:24.795218 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795275 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 13 00:02:24.795343 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795404 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 13 00:02:24.795460 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795514 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 13 00:02:24.795567 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795620 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 13 00:02:24.795672 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795725 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 13 00:02:24.795778 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795834 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 13 00:02:24.795887 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.795940 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 13 00:02:24.795993 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796047 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 13 00:02:24.796102 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796156 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 13 00:02:24.796209 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796261 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 13 00:02:24.796313 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796394 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 13 00:02:24.796450 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796503 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 13 00:02:24.796556 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796609 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 13 00:02:24.796661 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796713 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 13 00:02:24.796767 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796819 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 13 00:02:24.796873 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.796926 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 13 00:02:24.796978 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.797043 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 13 00:02:24.797097 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:02:24.797107 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:02:24.797114 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:02:24.797120 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:02:24.797127 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 13 00:02:24.797133 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:02:24.797142 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:02:24.797198 kernel: rtc_cmos 00:01: registered as rtc0 May 13 00:02:24.797209 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:02:24.797256 kernel: rtc_cmos 00:01: setting system clock to 2025-05-13T00:02:24 UTC (1747094544) May 13 00:02:24.797303 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 13 00:02:24.797313 kernel: intel_pstate: CPU model not supported May 13 00:02:24.797319 kernel: NET: Registered PF_INET6 protocol family May 13 00:02:24.797337 kernel: Segment Routing with IPv6 May 13 00:02:24.797347 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:02:24.797353 kernel: NET: Registered PF_PACKET protocol family May 13 00:02:24.797359 kernel: Key type dns_resolver registered May 13 00:02:24.797366 kernel: IPI shorthand broadcast: enabled May 13 00:02:24.797373 kernel: sched_clock: Marking stable (919003959, 235659547)->(1212019894, -57356388) May 13 00:02:24.797379 kernel: registered taskstats version 1 May 13 00:02:24.797386 kernel: Loading compiled-in X.509 certificates May 13 00:02:24.797392 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 72bf95fdb9aed340290dd5f38e76c1ea0e6f32b4' May 13 00:02:24.797398 kernel: Key type .fscrypt registered May 13 00:02:24.797406 kernel: Key type fscrypt-provisioning registered May 13 00:02:24.797412 
kernel: ima: No TPM chip found, activating TPM-bypass! May 13 00:02:24.797419 kernel: ima: Allocated hash algorithm: sha1 May 13 00:02:24.797425 kernel: ima: No architecture policies found May 13 00:02:24.797432 kernel: clk: Disabling unused clocks May 13 00:02:24.797438 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 00:02:24.797444 kernel: Write protecting the kernel read-only data: 40960k May 13 00:02:24.797452 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 00:02:24.797459 kernel: Run /init as init process May 13 00:02:24.797466 kernel: with arguments: May 13 00:02:24.797473 kernel: /init May 13 00:02:24.797479 kernel: with environment: May 13 00:02:24.797485 kernel: HOME=/ May 13 00:02:24.797492 kernel: TERM=linux May 13 00:02:24.797498 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:02:24.797505 systemd[1]: Successfully made /usr/ read-only. May 13 00:02:24.797514 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 00:02:24.797523 systemd[1]: Detected virtualization vmware. May 13 00:02:24.797529 systemd[1]: Detected architecture x86-64. May 13 00:02:24.797535 systemd[1]: Running in initrd. May 13 00:02:24.797542 systemd[1]: No hostname configured, using default hostname. May 13 00:02:24.797549 systemd[1]: Hostname set to . May 13 00:02:24.797555 systemd[1]: Initializing machine ID from random generator. May 13 00:02:24.797562 systemd[1]: Queued start job for default target initrd.target. May 13 00:02:24.797568 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:02:24.797576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:02:24.797583 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 00:02:24.797590 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:02:24.797597 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 00:02:24.797604 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 00:02:24.797611 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 00:02:24.797618 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 00:02:24.797626 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:02:24.797632 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:02:24.797639 systemd[1]: Reached target paths.target - Path Units. May 13 00:02:24.797645 systemd[1]: Reached target slices.target - Slice Units. May 13 00:02:24.797651 systemd[1]: Reached target swap.target - Swaps. May 13 00:02:24.797658 systemd[1]: Reached target timers.target - Timer Units. May 13 00:02:24.797664 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:02:24.797671 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
May 13 00:02:24.797677 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 00:02:24.797685 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 00:02:24.797692 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:02:24.797698 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:02:24.797705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:02:24.797711 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:02:24.797718 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 00:02:24.797725 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:02:24.797731 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 00:02:24.797739 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:02:24.797746 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:02:24.797752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:02:24.797759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:02:24.797765 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 00:02:24.797786 systemd-journald[217]: Collecting audit messages is disabled. May 13 00:02:24.797805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:02:24.797812 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:02:24.797819 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:02:24.797827 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:02:24.797834 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:02:24.797841 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:02:24.797847 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:02:24.797854 kernel: Bridge firewalling registered May 13 00:02:24.797861 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:02:24.797867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:02:24.797874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:02:24.797882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:02:24.797889 systemd-journald[217]: Journal started May 13 00:02:24.797905 systemd-journald[217]: Runtime Journal (/run/log/journal/93b6e47af54f43b98a0ca5723e492bb6) is 4.8M, max 38.6M, 33.7M free. May 13 00:02:24.754478 systemd-modules-load[219]: Inserted module 'overlay' May 13 00:02:24.799214 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:02:24.785955 systemd-modules-load[219]: Inserted module 'br_netfilter' May 13 00:02:24.803452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:02:24.804388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:02:24.806926 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 13 00:02:24.815386 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 00:02:24.818563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:02:24.820115 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:02:24.824914 dracut-cmdline[249]: dracut-dracut-053 May 13 00:02:24.829938 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6100311d7c6d7eb9a6a36f80463a2db3a7c7060cd315301434a372d8e2ca9bd6 May 13 00:02:24.852184 systemd-resolved[251]: Positive Trust Anchors: May 13 00:02:24.852483 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:02:24.852509 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:02:24.855272 systemd-resolved[251]: Defaulting to hostname 'linux'. May 13 00:02:24.856025 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:02:24.856166 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:02:24.879351 kernel: SCSI subsystem initialized May 13 00:02:24.886338 kernel: Loading iSCSI transport class v2.0-870. May 13 00:02:24.893346 kernel: iscsi: registered transport (tcp) May 13 00:02:24.905641 kernel: iscsi: registered transport (qla4xxx) May 13 00:02:24.905668 kernel: QLogic iSCSI HBA Driver May 13 00:02:24.925772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 00:02:24.926775 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 00:02:24.947364 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:02:24.947409 kernel: device-mapper: uevent: version 1.0.3 May 13 00:02:24.947419 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 00:02:24.984344 kernel: raid6: avx2x4 gen() 45906 MB/s May 13 00:02:24.999344 kernel: raid6: avx2x2 gen() 52968 MB/s May 13 00:02:25.016828 kernel: raid6: avx2x1 gen() 39550 MB/s May 13 00:02:25.016880 kernel: raid6: using algorithm avx2x2 gen() 52968 MB/s May 13 00:02:25.034755 kernel: raid6: .... xor() 27821 MB/s, rmw enabled May 13 00:02:25.034815 kernel: raid6: using avx2x2 recovery algorithm May 13 00:02:25.050347 kernel: xor: automatically using best checksumming function avx May 13 00:02:25.147346 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 00:02:25.153835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 00:02:25.155163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 13 00:02:25.168634 systemd-udevd[435]: Using default interface naming scheme 'v255'. May 13 00:02:25.171655 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:02:25.174481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 00:02:25.187931 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation May 13 00:02:25.204967 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:02:25.205772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:02:25.291689 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:02:25.293442 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 00:02:25.311836 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 00:02:25.312921 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:02:25.313652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:02:25.314092 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:02:25.315141 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 00:02:25.329087 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 00:02:25.360746 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 13 00:02:25.360783 kernel: vmw_pvscsi: using 64bit dma May 13 00:02:25.362585 kernel: vmw_pvscsi: max_id: 16 May 13 00:02:25.362609 kernel: vmw_pvscsi: setting ring_pages to 8 May 13 00:02:25.366444 kernel: vmw_pvscsi: enabling reqCallThreshold May 13 00:02:25.366475 kernel: vmw_pvscsi: driver-based request coalescing enabled May 13 00:02:25.366485 kernel: vmw_pvscsi: using MSI-X May 13 00:02:25.370496 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 13 00:02:25.371379 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 13 00:02:25.371472 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 13 00:02:25.380333 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 13 00:02:25.382355 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 13 00:02:25.386333 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 13 00:02:25.391354 kernel: libata version 3.00 loaded. May 13 00:02:25.397358 kernel: ata_piix 0000:00:07.1: version 2.13 May 13 00:02:25.403336 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:02:25.411850 kernel: scsi host1: ata_piix May 13 00:02:25.413813 kernel: scsi host2: ata_piix May 13 00:02:25.413907 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 13 00:02:25.413917 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 13 00:02:25.416339 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 13 00:02:25.417013 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:02:25.418414 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:02:25.418600 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:02:25.418713 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:02:25.418740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 00:02:25.418906 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:02:25.421333 kernel: AVX2 version of gcm_enc/dec engaged. May 13 00:02:25.421457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:02:25.422333 kernel: AES CTR mode by8 optimization enabled May 13 00:02:25.440957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:02:25.441790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:02:25.458752 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:02:25.582349 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 13 00:02:25.585333 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 13 00:02:25.600389 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 13 00:02:25.600519 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 00:02:25.600586 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 13 00:02:25.600649 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 13 00:02:25.600709 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 13 00:02:25.617428 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 13 00:02:25.617587 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:02:25.627545 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:02:25.627576 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 00:02:25.636340 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 00:02:25.664340 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (487) May 13 00:02:25.678342 kernel: BTRFS: device fsid d5ab0fb8-9c4f-4805-8fe7-b120550325cd devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (489) May 13 00:02:25.678520 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 00:02:25.684016 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 13 00:02:25.689490 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 13 00:02:25.693910 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 13 00:02:25.694038 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 13 00:02:25.696425 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 00:02:26.010415 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:02:27.064343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:02:27.065030 disk-uuid[591]: The operation has completed successfully. May 13 00:02:27.172498 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:02:27.172554 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 00:02:27.183597 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 00:02:27.193376 sh[607]: Success May 13 00:02:27.202349 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 00:02:27.335609 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 00:02:27.338388 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 00:02:27.354658 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
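verity-setup.service maps the read-only /usr partition through dm-verity, pinned to the verity.usrhash= value on the kernel command line. The commands below are a generic dm-verity illustration using veritysetup, not Flatcar's exact initrd flow; the device names and the separate hash partition are assumptions.

```sh
# Format a hash device for a read-only data device; this step prints the root hash.
veritysetup format /dev/sdX1 /dev/sdX2

# Open the verified mapping under /dev/mapper/usr, bound to that root hash.
veritysetup open /dev/sdX1 usr /dev/sdX2 <root-hash-from-format>

# Inspect the active mapping (it should report the sha256 implementation seen above).
veritysetup status usr
```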
May 13 00:02:27.418038 kernel: BTRFS info (device dm-0): first mount of filesystem d5ab0fb8-9c4f-4805-8fe7-b120550325cd May 13 00:02:27.418080 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 00:02:27.418100 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 00:02:27.419715 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 00:02:27.420929 kernel: BTRFS info (device dm-0): using free space tree May 13 00:02:27.433360 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 13 00:02:27.436099 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 00:02:27.437009 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 13 00:02:27.439393 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 00:02:27.494670 kernel: BTRFS info (device sda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 00:02:27.494714 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:02:27.496809 kernel: BTRFS info (device sda6): using free space tree May 13 00:02:27.503344 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:02:27.511353 kernel: BTRFS info (device sda6): last unmount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 00:02:27.517548 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 00:02:27.519109 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 00:02:27.574205 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 00:02:27.578429 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 00:02:27.642044 ignition[663]: Ignition 2.20.0 May 13 00:02:27.642051 ignition[663]: Stage: fetch-offline May 13 00:02:27.642075 ignition[663]: no configs at "/usr/lib/ignition/base.d" May 13 00:02:27.642081 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:02:27.642133 ignition[663]: parsed url from cmdline: "" May 13 00:02:27.642135 ignition[663]: no config URL provided May 13 00:02:27.642138 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:02:27.642143 ignition[663]: no config at "/usr/lib/ignition/user.ign" May 13 00:02:27.642614 ignition[663]: config successfully fetched May 13 00:02:27.642802 ignition[663]: parsing config with SHA512: 20a613f0083a68fa5d028c1e356b4b7bf7f8190e470b5e01cd5eb27ec72f1f6a470024b7831646b9210bc7a12cb6fadb8c69ed2c1ee66831f7131b96547d45b6 May 13 00:02:27.647185 unknown[663]: fetched base config from "system" May 13 00:02:27.647419 unknown[663]: fetched user config from "vmware" May 13 00:02:27.647776 ignition[663]: fetch-offline: fetch-offline passed May 13 00:02:27.647820 ignition[663]: Ignition finished successfully May 13 00:02:27.649315 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:02:27.659257 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:02:27.660636 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
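Ignition reports fetching a user config from the "vmware" provider, which on this platform is typically delivered through VMX guestinfo properties. The sketch below shows one common way to publish such a config with govc; the guestinfo key names and the VM name are assumptions, not something recorded in this log.

```sh
# Base64-encode an Ignition config and attach it to the VM's extra config (hypothetical VM name).
CONFIG_B64=$(base64 -w0 config.ign)
govc vm.change -vm my-flatcar-vm \
  -e "guestinfo.ignition.config.data=${CONFIG_B64}" \
  -e "guestinfo.ignition.config.data.encoding=base64"

# From inside the guest, open-vm-tools can read the same property back.
vmware-rpctool "info-get guestinfo.ignition.config.data" | base64 -d
```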
May 13 00:02:27.686536 systemd-networkd[798]: lo: Link UP May 13 00:02:27.686544 systemd-networkd[798]: lo: Gained carrier May 13 00:02:27.687417 systemd-networkd[798]: Enumeration completed May 13 00:02:27.687669 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:02:27.687699 systemd-networkd[798]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 13 00:02:27.688062 systemd[1]: Reached target network.target - Network. May 13 00:02:27.688726 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:02:27.691728 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 00:02:27.691868 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 00:02:27.690489 systemd-networkd[798]: ens192: Link UP May 13 00:02:27.690492 systemd-networkd[798]: ens192: Gained carrier May 13 00:02:27.690734 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 00:02:27.705766 ignition[801]: Ignition 2.20.0 May 13 00:02:27.705775 ignition[801]: Stage: kargs May 13 00:02:27.705878 ignition[801]: no configs at "/usr/lib/ignition/base.d" May 13 00:02:27.705884 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:02:27.706419 ignition[801]: kargs: kargs passed May 13 00:02:27.706444 ignition[801]: Ignition finished successfully May 13 00:02:27.707457 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 00:02:27.708369 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 00:02:27.720099 ignition[809]: Ignition 2.20.0 May 13 00:02:27.720111 ignition[809]: Stage: disks May 13 00:02:27.720237 ignition[809]: no configs at "/usr/lib/ignition/base.d" May 13 00:02:27.720244 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:02:27.720816 ignition[809]: disks: disks passed May 13 00:02:27.720846 ignition[809]: Ignition finished successfully May 13 00:02:27.721586 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 00:02:27.721815 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 00:02:27.721943 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 00:02:27.722135 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:02:27.722316 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:02:27.722489 systemd[1]: Reached target basic.target - Basic System. May 13 00:02:27.723119 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 00:02:27.743161 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 13 00:02:27.744213 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 00:02:27.745038 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 00:02:27.812082 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 00:02:27.812337 kernel: EXT4-fs (sda9): mounted filesystem c9958eea-1ed5-48cc-be53-8e1c8ef051da r/w with ordered data mode. Quota mode: none. May 13 00:02:27.812459 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 00:02:27.813555 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:02:27.814372 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
May 13 00:02:27.814774 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 00:02:27.814805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:02:27.814820 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:02:27.825459 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 00:02:27.826829 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 00:02:27.832357 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (825) May 13 00:02:27.836282 kernel: BTRFS info (device sda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 00:02:27.836320 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:02:27.836347 kernel: BTRFS info (device sda6): using free space tree May 13 00:02:27.845380 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:02:27.850942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 00:02:27.883304 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:02:27.885876 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory May 13 00:02:27.888118 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:02:27.890735 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:02:27.979757 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 00:02:27.980533 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 00:02:27.982391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 00:02:27.989339 kernel: BTRFS info (device sda6): last unmount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 00:02:28.006352 ignition[938]: INFO : Ignition 2.20.0 May 13 00:02:28.006352 ignition[938]: INFO : Stage: mount May 13 00:02:28.006352 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:02:28.006352 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:02:28.007576 ignition[938]: INFO : mount: mount passed May 13 00:02:28.007576 ignition[938]: INFO : Ignition finished successfully May 13 00:02:28.007802 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 00:02:28.008699 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 00:02:28.008903 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 00:02:28.414466 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 00:02:28.415404 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:02:28.431341 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (951) May 13 00:02:28.433719 kernel: BTRFS info (device sda6): first mount of filesystem af1b0b29-eeab-4df3-872e-4ad99309ae06 May 13 00:02:28.433739 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:02:28.433750 kernel: BTRFS info (device sda6): using free space tree May 13 00:02:28.437342 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:02:28.438582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 00:02:28.452754 ignition[968]: INFO : Ignition 2.20.0 May 13 00:02:28.452754 ignition[968]: INFO : Stage: files May 13 00:02:28.453097 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:02:28.453097 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:02:28.453483 ignition[968]: DEBUG : files: compiled without relabeling support, skipping May 13 00:02:28.464843 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:02:28.464843 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:02:28.478994 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:02:28.479275 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:02:28.479497 unknown[968]: wrote ssh authorized keys file for user: core May 13 00:02:28.479763 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:02:28.481640 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:02:28.481640 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 00:02:29.223480 systemd-networkd[798]: ens192: Gained IPv6LL May 13 00:02:33.555718 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 00:02:33.702007 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:02:33.702352 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:02:33.702352 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 13 00:02:34.230124 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:02:34.303902 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:02:34.304141 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:02:34.304141 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:02:34.304141 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:02:34.304141 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:02:34.304141 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 00:02:34.304861 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 13 00:02:34.734477 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:02:35.004735 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 00:02:35.004979 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 00:02:35.005138 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 00:02:35.005138 ignition[968]: INFO : files: op(d): [started] processing unit "prepare-helm.service" May 13 00:02:35.005138 ignition[968]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:02:35.005579 ignition[968]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:02:35.005579 ignition[968]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 13 00:02:35.005579 ignition[968]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" May 13 00:02:35.005579 ignition[968]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:02:35.005579 ignition[968]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:02:35.005579 ignition[968]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" May 13 00:02:35.005579 ignition[968]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:02:35.035035 ignition[968]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:02:35.038353 ignition[968]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:02:35.038353 ignition[968]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:02:35.038353 ignition[968]: INFO : files: op(13): [started] setting 
preset to enabled for "prepare-helm.service" May 13 00:02:35.038353 ignition[968]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:02:35.038353 ignition[968]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:02:35.038353 ignition[968]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:02:35.038353 ignition[968]: INFO : files: files passed May 13 00:02:35.038353 ignition[968]: INFO : Ignition finished successfully May 13 00:02:35.039015 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 00:02:35.042294 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 00:02:35.042861 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 00:02:35.048739 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:02:35.048972 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 00:02:35.053234 initrd-setup-root-after-ignition[1000]: grep: May 13 00:02:35.053633 initrd-setup-root-after-ignition[1004]: grep: May 13 00:02:35.053633 initrd-setup-root-after-ignition[1000]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:02:35.053633 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 00:02:35.054122 initrd-setup-root-after-ignition[1004]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:02:35.054934 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:02:35.055163 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 00:02:35.055775 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 00:02:35.096172 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:02:35.096236 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 00:02:35.096582 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 00:02:35.096696 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 00:02:35.096934 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 00:02:35.097426 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 00:02:35.120059 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:02:35.120889 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 00:02:35.134691 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 00:02:35.135048 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:02:35.135320 systemd[1]: Stopped target timers.target - Timer Units. May 13 00:02:35.135601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:02:35.135686 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:02:35.136159 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 00:02:35.136580 systemd[1]: Stopped target basic.target - Basic System. May 13 00:02:35.136818 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
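The files stage above writes the helm and cilium archives, several manifests, update.conf, a kubernetes sysext image plus its /etc/extensions symlink, and the prepare-helm / coreos-metadata units. A config producing roughly this result could look like the hedged Ignition sketch below; the spec version, the unit body, and the subset of entries shown are illustrative guesses reconstructed from the log, not the config that was actually fetched.

```sh
# Hypothetical Ignition config covering a subset of the operations logged above.
cat <<'EOF' > config.ign
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [
      { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } },
      { "path": "/opt/bin/cilium.tar.gz",
        "contents": { "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz" } }
    ],
    "links": [
      { "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true,
        "contents": "[Unit]\nDescription=Unpack helm (illustrative)\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz -C /opt/bin --strip-components=1\n\n[Install]\nWantedBy=multi-user.target\n" },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}
EOF
```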
May 13 00:02:35.137079 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:02:35.137319 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 00:02:35.137593 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 00:02:35.137861 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:02:35.138121 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 00:02:35.138438 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 00:02:35.138696 systemd[1]: Stopped target swap.target - Swaps. May 13 00:02:35.138882 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:02:35.138964 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 00:02:35.139439 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 00:02:35.139703 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:02:35.139957 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:02:35.140101 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:02:35.140367 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:02:35.140440 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:02:35.140878 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:02:35.140950 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:02:35.141352 systemd[1]: Stopped target paths.target - Path Units. May 13 00:02:35.141562 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:02:35.145369 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:02:35.145575 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:02:35.145816 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:02:35.146009 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:02:35.146070 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:02:35.146217 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:02:35.146263 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:02:35.146450 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:02:35.146522 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:02:35.146775 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:02:35.146836 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:02:35.147654 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:02:35.150477 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:02:35.150622 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:02:35.150700 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:02:35.150881 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:02:35.150941 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:02:35.158014 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:02:35.158076 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 13 00:02:35.166337 ignition[1024]: INFO : Ignition 2.20.0 May 13 00:02:35.166337 ignition[1024]: INFO : Stage: umount May 13 00:02:35.166732 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:02:35.166732 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:02:35.167053 ignition[1024]: INFO : umount: umount passed May 13 00:02:35.167053 ignition[1024]: INFO : Ignition finished successfully May 13 00:02:35.167639 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:02:35.167706 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:02:35.168128 systemd[1]: Stopped target network.target - Network. May 13 00:02:35.168224 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:02:35.168256 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:02:35.168418 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:02:35.168441 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:02:35.168575 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:02:35.168598 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:02:35.168751 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:02:35.168773 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:02:35.168978 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:02:35.169419 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:02:35.171102 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:02:35.171455 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:02:35.171519 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:02:35.172739 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 00:02:35.173000 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:02:35.173044 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:02:35.175309 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 00:02:35.179398 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:02:35.179668 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:02:35.180675 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 00:02:35.180947 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:02:35.180968 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:02:35.181795 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:02:35.182047 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:02:35.182193 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:02:35.182524 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 13 00:02:35.182554 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 00:02:35.182795 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:02:35.182817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 13 00:02:35.183262 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:02:35.183286 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:02:35.183422 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:02:35.184061 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:02:35.188755 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:02:35.188844 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:02:35.190630 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:02:35.190674 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:02:35.191176 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:02:35.191204 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:02:35.191678 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:02:35.191709 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:02:35.192181 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:02:35.192299 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:02:35.192613 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:02:35.192641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:02:35.193734 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:02:35.193980 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:02:35.194120 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:02:35.194532 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 00:02:35.194560 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:02:35.194978 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:02:35.195002 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:02:35.195353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:02:35.195379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:02:35.196005 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:02:35.196193 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:02:35.205830 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:02:35.205894 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:02:35.276497 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:02:35.276600 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:02:35.277089 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:02:35.277236 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:02:35.277280 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:02:35.277964 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:02:35.287766 systemd[1]: Switching root. 
May 13 00:02:35.318021 systemd-journald[217]: Journal stopped May 13 00:02:36.643870 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). May 13 00:02:36.643900 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:02:36.643908 kernel: SELinux: policy capability open_perms=1 May 13 00:02:36.643914 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:02:36.643919 kernel: SELinux: policy capability always_check_network=0 May 13 00:02:36.643924 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:02:36.643932 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:02:36.643938 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:02:36.643944 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:02:36.643950 systemd[1]: Successfully loaded SELinux policy in 30.959ms. May 13 00:02:36.643956 kernel: audit: type=1403 audit(1747094556.073:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:02:36.643962 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.368ms. May 13 00:02:36.643969 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 00:02:36.643977 systemd[1]: Detected virtualization vmware. May 13 00:02:36.643984 systemd[1]: Detected architecture x86-64. May 13 00:02:36.643991 systemd[1]: Detected first boot. May 13 00:02:36.643998 systemd[1]: Initializing machine ID from random generator. May 13 00:02:36.644006 zram_generator::config[1068]: No configuration found. May 13 00:02:36.644093 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 13 00:02:36.644105 kernel: Guest personality initialized and is active May 13 00:02:36.644111 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 00:02:36.644117 kernel: Initialized host personality May 13 00:02:36.644123 kernel: NET: Registered PF_VSOCK protocol family May 13 00:02:36.644130 systemd[1]: Populated /etc with preset unit settings. May 13 00:02:36.644139 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:02:36.644147 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 13 00:02:36.644154 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 00:02:36.644160 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:02:36.644167 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:02:36.644173 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:02:36.644180 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:02:36.644188 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:02:36.644195 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:02:36.644201 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:02:36.644209 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
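The "Ignoring unknown escape sequences" warning above comes from a coreos-metadata.service drop-in whose ExecStart embeds \K and \d, which systemd does not recognize as unit-file escapes (it leaves them in place, so the command generally still reaches the shell unchanged). Moving the pipeline into a standalone script avoids the warning; this sketch is based on the quoted command, and the default ${OUTPUT} path is an assumption.

```sh
#!/bin/bash
# Derive the VM's private (10.x) and other IPv4 addresses from ens192, as the quoted drop-in does.
OUTPUT=${OUTPUT:-/run/metadata/coreos}

private_ipv4=$(ip addr show ens192 | grep 'inet 10\.' | grep -Po 'inet \K[\d.]+' | head -n1)
public_ipv4=$(ip addr show ens192 | grep -v 'inet 10\.' | grep -Po 'inet \K[\d.]+' | head -n1)

printf 'COREOS_CUSTOM_PRIVATE_IPV4=%s\nCOREOS_CUSTOM_PUBLIC_IPV4=%s\n' \
  "$private_ipv4" "$public_ipv4" > "$OUTPUT"
```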
May 13 00:02:36.644215 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:02:36.644222 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:02:36.644229 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:02:36.644235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:02:36.644243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:02:36.644252 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:02:36.644259 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:02:36.644265 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 00:02:36.644272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:02:36.644279 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 00:02:36.644286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:02:36.644294 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:02:36.644301 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:02:36.644307 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:02:36.644314 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:02:36.644321 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:02:36.651435 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:02:36.651448 systemd[1]: Reached target slices.target - Slice Units. May 13 00:02:36.651456 systemd[1]: Reached target swap.target - Swaps. May 13 00:02:36.651463 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:02:36.651473 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:02:36.651481 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 00:02:36.651487 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:02:36.651494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:02:36.651503 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:02:36.651510 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:02:36.651516 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:02:36.651523 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:02:36.651530 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:02:36.651537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:02:36.651544 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:02:36.651551 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:02:36.651559 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 13 00:02:36.651567 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:02:36.651574 systemd[1]: Reached target machines.target - Containers. May 13 00:02:36.651581 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:02:36.651588 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 13 00:02:36.651595 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:02:36.651603 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:02:36.651610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:02:36.651617 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:02:36.651625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:02:36.651632 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:02:36.651638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:02:36.651645 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:02:36.651652 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:02:36.651659 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:02:36.651666 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:02:36.651672 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:02:36.651681 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 00:02:36.651688 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:02:36.651695 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:02:36.651702 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:02:36.651709 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:02:36.651716 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 00:02:36.651723 kernel: loop: module loaded May 13 00:02:36.651729 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:02:36.651737 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:02:36.651744 systemd[1]: Stopped verity-setup.service. May 13 00:02:36.651752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:02:36.651759 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:02:36.651766 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:02:36.651773 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:02:36.651780 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:02:36.651787 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
May 13 00:02:36.651794 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:02:36.651802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:02:36.651810 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:02:36.651817 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:02:36.651824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:02:36.651831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:02:36.651837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:02:36.651844 kernel: fuse: init (API version 7.39) May 13 00:02:36.651851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:02:36.651858 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:02:36.651865 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:02:36.651872 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:02:36.651879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:02:36.651885 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:02:36.651892 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:02:36.651899 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:02:36.651906 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:02:36.651913 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:02:36.651945 systemd-journald[1159]: Collecting audit messages is disabled. May 13 00:02:36.651964 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:02:36.651972 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:02:36.651979 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:02:36.651987 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 00:02:36.651995 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:02:36.652003 systemd-journald[1159]: Journal started May 13 00:02:36.652024 systemd-journald[1159]: Runtime Journal (/run/log/journal/a332657eea8441f1b200fe927a57549d) is 4.8M, max 38.6M, 33.7M free. May 13 00:02:36.457431 systemd[1]: Queued start job for default target multi-user.target. May 13 00:02:36.464694 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 13 00:02:36.464927 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:02:36.652532 jq[1139]: true May 13 00:02:36.653048 jq[1173]: true May 13 00:02:36.667242 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:02:36.667286 kernel: ACPI: bus type drm_connector registered May 13 00:02:36.667297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:02:36.681941 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:02:36.681981 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 13 00:02:36.689744 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:02:36.689776 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:02:36.696365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:02:36.716336 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:02:36.720336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:02:36.723337 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:02:36.723656 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:02:36.724557 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:02:36.724681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:02:36.724980 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 00:02:36.725947 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:02:36.726119 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:02:36.726443 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:02:36.726861 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:02:36.745613 kernel: loop0: detected capacity change from 0 to 205544 May 13 00:02:36.745263 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:02:36.748437 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:02:36.753458 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 00:02:36.753796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:02:36.776681 ignition[1183]: Ignition 2.20.0 May 13 00:02:36.776876 ignition[1183]: deleting config from guestinfo properties May 13 00:02:36.781555 systemd-journald[1159]: Time spent on flushing to /var/log/journal/a332657eea8441f1b200fe927a57549d is 39.518ms for 1860 entries. May 13 00:02:36.781555 systemd-journald[1159]: System Journal (/var/log/journal/a332657eea8441f1b200fe927a57549d) is 8M, max 584.8M, 576.8M free. May 13 00:02:36.826683 systemd-journald[1159]: Received client request to flush runtime journal. May 13 00:02:36.826710 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:02:36.781024 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 00:02:36.792643 ignition[1183]: Successfully deleted config May 13 00:02:36.784915 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. May 13 00:02:36.784924 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. May 13 00:02:36.795641 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 13 00:02:36.795952 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:02:36.799198 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:02:36.827609 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:02:36.837576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
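systemd-journal-flush.service migrates the runtime journal from /run/log/journal to the persistent /var/log/journal location, and journald reports both runtime and system journal sizes above. The same information can be queried later with journalctl; the vacuum size below is only an example value.

```sh
journalctl --disk-usage          # total space used by active and archived journal files
journalctl --flush               # ask journald to flush /run/log/journal to /var/log/journal
journalctl --vacuum-size=500M    # optionally trim persistent journals down to ~500 MB
```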
May 13 00:02:36.839368 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:02:36.855889 udevadm[1241]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:02:36.865795 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:02:36.868673 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:02:36.874385 kernel: loop1: detected capacity change from 0 to 109808 May 13 00:02:36.897801 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 13 00:02:36.898289 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 13 00:02:36.902559 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:02:36.906421 kernel: loop2: detected capacity change from 0 to 151640 May 13 00:02:36.961367 kernel: loop3: detected capacity change from 0 to 2960 May 13 00:02:36.991404 kernel: loop4: detected capacity change from 0 to 205544 May 13 00:02:37.016363 kernel: loop5: detected capacity change from 0 to 109808 May 13 00:02:37.034453 kernel: loop6: detected capacity change from 0 to 151640 May 13 00:02:37.064341 kernel: loop7: detected capacity change from 0 to 2960 May 13 00:02:37.076563 (sd-merge)[1250]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 13 00:02:37.077297 (sd-merge)[1250]: Merged extensions into '/usr'. May 13 00:02:37.081809 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:02:37.081819 systemd[1]: Reloading... May 13 00:02:37.145338 zram_generator::config[1280]: No configuration found. May 13 00:02:37.232898 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:02:37.250915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:02:37.292527 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:02:37.292638 systemd[1]: Reloading finished in 210 ms. May 13 00:02:37.310439 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:02:37.320446 systemd[1]: Starting ensure-sysext.service... May 13 00:02:37.323388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:02:37.328833 ldconfig[1190]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:02:37.335203 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 00:02:37.335630 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:02:37.342372 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:02:37.343029 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:02:37.343183 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
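systemd-sysext has located the containerd-flatcar, docker-flatcar, kubernetes, and oem-vmware extension images and merged them into /usr, which is what triggers the reload above. A quick way to inspect or re-apply that state from a shell is sketched below.

```sh
# Show which hierarchies currently have extensions merged and from which images.
systemd-sysext status

# List the extension images visible to systemd-sysext (e.g. the kubernetes.raw linked by Ignition).
systemd-sysext list

# Re-merge after adding or removing images under /etc/extensions or /var/lib/extensions.
systemd-sysext refresh
```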
May 13 00:02:37.343841 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:02:37.344038 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. May 13 00:02:37.344108 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. May 13 00:02:37.344491 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... May 13 00:02:37.344502 systemd[1]: Reloading... May 13 00:02:37.346374 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:02:37.346379 systemd-tmpfiles[1334]: Skipping /boot May 13 00:02:37.352465 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:02:37.352509 systemd-tmpfiles[1334]: Skipping /boot May 13 00:02:37.375039 systemd-udevd[1338]: Using default interface naming scheme 'v255'. May 13 00:02:37.390338 zram_generator::config[1362]: No configuration found. May 13 00:02:37.483336 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:02:37.483374 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 13 00:02:37.493342 kernel: ACPI: button: Power Button [PWRF] May 13 00:02:37.493376 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1381) May 13 00:02:37.506183 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:02:37.528467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:02:37.588675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 00:02:37.588967 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 00:02:37.589074 systemd[1]: Reloading finished in 244 ms. May 13 00:02:37.597296 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:02:37.601219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:02:37.604391 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:02:37.618002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:02:37.619140 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 00:02:37.621634 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:02:37.623701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:02:37.627322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:02:37.628484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:02:37.628653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:02:37.631038 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 13 00:02:37.631146 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 00:02:37.634852 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:02:37.639487 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:02:37.646854 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:02:37.655271 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:02:37.655412 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:02:37.659003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:02:37.659109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:02:37.659173 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 00:02:37.659239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:02:37.664018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:02:37.667533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:02:37.668423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:02:37.668493 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 00:02:37.668584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:02:37.670315 (udev-worker)[1384]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 13 00:02:37.673463 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:02:37.673585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:02:37.678582 systemd[1]: Finished ensure-sysext.service. May 13 00:02:37.678859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:02:37.678953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:02:37.680472 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:02:37.682469 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:02:37.681677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:02:37.681772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:02:37.682062 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:02:37.682154 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 13 00:02:37.691515 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:02:37.694443 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:02:37.694473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:02:37.696411 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:02:37.699462 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:02:37.700223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:02:37.739355 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:02:37.741824 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:02:37.741977 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:02:37.749714 augenrules[1500]: No rules May 13 00:02:37.751698 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:02:37.751996 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:02:37.754354 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 00:02:37.754691 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:02:37.757291 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:02:37.757396 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:02:37.772807 lvm[1497]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:02:37.784199 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:02:37.803385 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:02:37.815245 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:02:37.815730 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:02:37.816223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:02:37.816550 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:02:37.818400 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:02:37.828101 systemd-networkd[1461]: lo: Link UP May 13 00:02:37.828105 systemd-networkd[1461]: lo: Gained carrier May 13 00:02:37.828871 systemd-networkd[1461]: Enumeration completed May 13 00:02:37.828926 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:02:37.829079 systemd-networkd[1461]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 13 00:02:37.830499 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 00:02:37.830617 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 00:02:37.832417 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
May 13 00:02:37.833771 systemd-networkd[1461]: ens192: Link UP May 13 00:02:37.833855 systemd-networkd[1461]: ens192: Gained carrier May 13 00:02:37.833916 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:02:37.839010 lvm[1518]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:02:37.839591 systemd-resolved[1464]: Positive Trust Anchors: May 13 00:02:37.839597 systemd-resolved[1464]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:02:37.839619 systemd-resolved[1464]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:02:37.840766 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection. May 13 00:02:37.844904 systemd-resolved[1464]: Defaulting to hostname 'linux'. May 13 00:02:37.846263 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:02:37.846506 systemd[1]: Reached target network.target - Network. May 13 00:02:37.846665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:02:37.846817 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:02:37.847615 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:02:37.847738 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:02:37.847924 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:02:37.848069 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:02:37.848170 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:02:37.848315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:02:37.848345 systemd[1]: Reached target paths.target - Path Units. May 13 00:02:37.848427 systemd[1]: Reached target timers.target - Timer Units. May 13 00:02:37.849165 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:02:37.850271 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:02:37.852717 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 00:02:37.852978 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 00:02:37.853123 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 00:02:37.855943 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:02:37.856341 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 00:02:37.856953 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 00:02:37.857191 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
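With systemd-networkd and systemd-resolved settled above (ens192 configured from 00-vmware.network, resolved defaulting to hostname 'linux'), the resulting state could be checked with the standard tooling; a small sketch:

    networkctl status ens192    # link state, addresses, and the .network file in use
    resolvectl status           # per-link DNS servers plus the trust anchors listed above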
May 13 00:02:37.857597 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:02:37.857739 systemd[1]: Reached target basic.target - Basic System. May 13 00:02:37.857889 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:02:37.857907 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:02:37.858887 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:02:37.861392 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:02:37.868373 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:02:37.871295 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:02:37.871462 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:02:37.874359 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:02:37.875101 jq[1527]: false May 13 00:02:37.876729 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:02:37.878391 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:02:37.879157 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:02:37.884450 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:02:37.885012 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:02:37.888098 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:02:37.888790 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:02:37.889755 dbus-daemon[1526]: [system] SELinux support is enabled May 13 00:02:37.893033 extend-filesystems[1528]: Found loop4 May 13 00:02:37.893270 extend-filesystems[1528]: Found loop5 May 13 00:02:37.896710 extend-filesystems[1528]: Found loop6 May 13 00:02:37.896710 extend-filesystems[1528]: Found loop7 May 13 00:02:37.896710 extend-filesystems[1528]: Found sda May 13 00:02:37.896710 extend-filesystems[1528]: Found sda1 May 13 00:02:37.896710 extend-filesystems[1528]: Found sda2 May 13 00:02:37.896710 extend-filesystems[1528]: Found sda3 May 13 00:02:37.896710 extend-filesystems[1528]: Found usr May 13 00:02:37.896710 extend-filesystems[1528]: Found sda4 May 13 00:02:37.896710 extend-filesystems[1528]: Found sda6 May 13 00:02:37.896710 extend-filesystems[1528]: Found sda7 May 13 00:02:37.896710 extend-filesystems[1528]: Found sda9 May 13 00:02:37.896710 extend-filesystems[1528]: Checking size of /dev/sda9 May 13 00:02:37.893576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:02:37.901680 jq[1536]: true May 13 00:02:37.904395 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 13 00:02:37.904885 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:02:37.907069 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:02:37.908377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 13 00:02:37.908503 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:02:37.910082 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:02:37.910190 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:02:37.919084 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:02:37.919114 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:02:37.920061 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:02:37.920075 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 00:02:37.924591 extend-filesystems[1528]: Old size kept for /dev/sda9 May 13 00:02:37.924591 extend-filesystems[1528]: Found sr0 May 13 00:02:37.925013 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:02:37.927568 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:02:37.930431 jq[1548]: true May 13 00:02:37.934742 update_engine[1535]: I20250513 00:02:37.933382 1535 main.cc:92] Flatcar Update Engine starting May 13 00:02:37.937048 update_engine[1535]: I20250513 00:02:37.937022 1535 update_check_scheduler.cc:74] Next update check in 4m16s May 13 00:02:37.939245 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:02:37.939548 (ntainerd)[1562]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:02:37.939605 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:02:37.940735 systemd[1]: Started update-engine.service - Update Engine. May 13 00:02:37.954600 tar[1546]: linux-amd64/helm May 13 00:02:37.947987 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:02:37.957379 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1384) May 13 00:02:37.958249 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 13 00:02:37.963861 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 13 00:02:38.008916 bash[1586]: Updated "/home/core/.ssh/authorized_keys" May 13 00:02:38.009221 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:02:38.009625 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:02:38.020442 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 13 00:02:38.023999 systemd-logind[1534]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:02:38.024018 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:02:38.034870 systemd-logind[1534]: New seat seat0. May 13 00:02:38.039556 unknown[1576]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 13 00:02:38.044785 unknown[1576]: Core dump limit set to -1 May 13 00:02:38.045290 systemd[1]: Started systemd-logind.service - User Login Management. 
May 13 00:02:38.125600 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:02:38.232948 containerd[1562]: time="2025-05-13T00:02:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 00:02:38.235562 containerd[1562]: time="2025-05-13T00:02:38.235535667Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 00:02:38.252575 containerd[1562]: time="2025-05-13T00:02:38.252346688Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="4.799µs" May 13 00:02:38.252575 containerd[1562]: time="2025-05-13T00:02:38.252376690Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 00:02:38.252575 containerd[1562]: time="2025-05-13T00:02:38.252392408Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 00:02:38.252575 containerd[1562]: time="2025-05-13T00:02:38.252483276Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 00:02:38.252575 containerd[1562]: time="2025-05-13T00:02:38.252495909Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 00:02:38.252575 containerd[1562]: time="2025-05-13T00:02:38.252512321Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 00:02:38.252575 containerd[1562]: time="2025-05-13T00:02:38.252547472Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.252558133Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253061135Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253073989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253081507Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253089275Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253137045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253255602Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253276502Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253286233Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 00:02:38.253579 containerd[1562]: time="2025-05-13T00:02:38.253300344Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 00:02:38.254540 containerd[1562]: time="2025-05-13T00:02:38.254520438Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 00:02:38.254629 containerd[1562]: time="2025-05-13T00:02:38.254613525Z" level=info msg="metadata content store policy set" policy=shared May 13 00:02:38.257340 containerd[1562]: time="2025-05-13T00:02:38.257317041Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 00:02:38.257417 containerd[1562]: time="2025-05-13T00:02:38.257408245Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 00:02:38.257459 containerd[1562]: time="2025-05-13T00:02:38.257447271Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 00:02:38.257507 containerd[1562]: time="2025-05-13T00:02:38.257498984Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 00:02:38.257540 containerd[1562]: time="2025-05-13T00:02:38.257533373Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 00:02:38.257574 containerd[1562]: time="2025-05-13T00:02:38.257563217Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 00:02:38.257606 containerd[1562]: time="2025-05-13T00:02:38.257600169Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 00:02:38.257644 containerd[1562]: time="2025-05-13T00:02:38.257636504Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 00:02:38.257678 containerd[1562]: time="2025-05-13T00:02:38.257671424Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 00:02:38.257710 containerd[1562]: time="2025-05-13T00:02:38.257702547Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 00:02:38.257740 containerd[1562]: time="2025-05-13T00:02:38.257732622Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 00:02:38.257772 containerd[1562]: time="2025-05-13T00:02:38.257765939Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 00:02:38.257859 containerd[1562]: time="2025-05-13T00:02:38.257850838Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 00:02:38.257896 containerd[1562]: time="2025-05-13T00:02:38.257889909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 00:02:38.258111 containerd[1562]: time="2025-05-13T00:02:38.258102640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 
00:02:38.258144 containerd[1562]: time="2025-05-13T00:02:38.258137935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 00:02:38.258175 containerd[1562]: time="2025-05-13T00:02:38.258169185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 00:02:38.258205 containerd[1562]: time="2025-05-13T00:02:38.258199080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 00:02:38.258350 containerd[1562]: time="2025-05-13T00:02:38.258340938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 00:02:38.258396 containerd[1562]: time="2025-05-13T00:02:38.258387858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 00:02:38.258429 containerd[1562]: time="2025-05-13T00:02:38.258422686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 00:02:38.258459 containerd[1562]: time="2025-05-13T00:02:38.258453344Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 00:02:38.258489 containerd[1562]: time="2025-05-13T00:02:38.258483208Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 00:02:38.258579 containerd[1562]: time="2025-05-13T00:02:38.258570120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 00:02:38.258812 containerd[1562]: time="2025-05-13T00:02:38.258805041Z" level=info msg="Start snapshots syncer" May 13 00:02:38.258863 containerd[1562]: time="2025-05-13T00:02:38.258855267Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 00:02:38.260068 containerd[1562]: time="2025-05-13T00:02:38.259027723Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 00:02:38.260068 containerd[1562]: time="2025-05-13T00:02:38.259060380Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259098544Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259152893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259167634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259174985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259181118Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259188775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259195024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259201026Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259213882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 00:02:38.260158 containerd[1562]: 
time="2025-05-13T00:02:38.259221301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259226737Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259244549Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259253084Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 00:02:38.260158 containerd[1562]: time="2025-05-13T00:02:38.259258230Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259263637Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259268265Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259273903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259279719Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259291817Z" level=info msg="runtime interface created" May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259295488Z" level=info msg="created NRI interface" May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259301476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259307763Z" level=info msg="Connect containerd service" May 13 00:02:38.260365 containerd[1562]: time="2025-05-13T00:02:38.259336162Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:02:38.261417 containerd[1562]: time="2025-05-13T00:02:38.261405222Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:02:38.380202 sshd_keygen[1565]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:02:38.410272 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411388480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411421811Z" level=info msg="Start subscribing containerd event" May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411425036Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411450422Z" level=info msg="Start recovering state" May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411526844Z" level=info msg="Start event monitor" May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411537555Z" level=info msg="Start cni network conf syncer for default" May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411542907Z" level=info msg="Start streaming server" May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411549875Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411554042Z" level=info msg="runtime interface starting up..." May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411556995Z" level=info msg="starting plugins..." May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411564292Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 00:02:38.413341 containerd[1562]: time="2025-05-13T00:02:38.411629122Z" level=info msg="containerd successfully booted in 0.178885s" May 13 00:02:38.412515 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:02:38.412855 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:02:38.427954 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:02:38.428089 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:02:38.430260 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:02:38.442056 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:02:38.444490 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:02:38.446291 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 00:02:38.446586 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:02:38.448383 tar[1546]: linux-amd64/LICENSE May 13 00:02:38.448431 tar[1546]: linux-amd64/README.md May 13 00:02:38.459293 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:02:39.655448 systemd-networkd[1461]: ens192: Gained IPv6LL May 13 00:02:39.655891 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection. May 13 00:02:39.657239 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:02:39.657894 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:02:39.659297 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 13 00:02:39.661455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:02:39.671027 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:02:39.692794 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:02:39.696046 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:02:39.696394 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 13 00:02:39.696877 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:02:40.524185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:02:40.524645 systemd[1]: Reached target multi-user.target - Multi-User System. 
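containerd came up above but logged that no CNI config was found in /etc/cni/net.d, so pod networking is not ready yet; that error normally clears once a network add-on (installed later) drops a conflist into that directory. A quick check, assuming crictl is pointed at containerd's default socket:

    ls /etc/cni/net.d     # empty until a CNI add-on installs its config
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | grep -i network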
May 13 00:02:40.525379 systemd[1]: Startup finished in 1.003s (kernel) + 11.444s (initrd) + 4.481s (userspace) = 16.930s. May 13 00:02:40.533879 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:02:40.564680 login[1679]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 00:02:40.566388 login[1680]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 00:02:40.572042 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:02:40.574167 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:02:40.580036 systemd-logind[1534]: New session 2 of user core. May 13 00:02:40.583240 systemd-logind[1534]: New session 1 of user core. May 13 00:02:40.589048 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:02:40.591803 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:02:40.599789 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:02:40.601648 systemd-logind[1534]: New session c1 of user core. May 13 00:02:40.718831 systemd[1723]: Queued start job for default target default.target. May 13 00:02:40.725247 systemd[1723]: Created slice app.slice - User Application Slice. May 13 00:02:40.725265 systemd[1723]: Reached target paths.target - Paths. May 13 00:02:40.725293 systemd[1723]: Reached target timers.target - Timers. May 13 00:02:40.727574 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:02:40.734248 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:02:40.734832 systemd[1723]: Reached target sockets.target - Sockets. May 13 00:02:40.734871 systemd[1723]: Reached target basic.target - Basic System. May 13 00:02:40.734904 systemd[1723]: Reached target default.target - Main User Target. May 13 00:02:40.734927 systemd[1723]: Startup finished in 129ms. May 13 00:02:40.735575 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:02:40.742406 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:02:40.743056 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 00:02:41.054638 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection. May 13 00:02:41.110815 kubelet[1716]: E0513 00:02:41.110782 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:02:41.112262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:02:41.112402 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:02:41.112688 systemd[1]: kubelet.service: Consumed 616ms CPU time, 238.1M memory peak. May 13 00:02:51.252230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:02:51.253612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:02:51.713357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
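The kubelet failure above (and its scheduled retries) is the unit starting before /var/lib/kubelet/config.yaml exists; on a kubeadm-provisioned node that file only appears after kubeadm init or kubeadm join, so a crash loop before provisioning is expected. kubeadm is an assumption here, since the log does not say how the node is bootstrapped. A sketch of how this could be confirmed:

    systemctl status kubelet --no-pager     # restart counter and the last "failed to load kubelet config file" error
    ls -l /var/lib/kubelet/config.yaml      # missing until the node is bootstrapped
    # after kubeadm init / kubeadm join (if that is the provisioning path), the file is written
    # and the next scheduled restart of kubelet.service succeeds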
May 13 00:02:51.722556 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:02:51.807254 kubelet[1768]: E0513 00:02:51.807214 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:02:51.809738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:02:51.809833 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:02:51.810151 systemd[1]: kubelet.service: Consumed 109ms CPU time, 96.4M memory peak. May 13 00:03:02.002270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:03:02.003482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:03:02.288531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:02.298590 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:03:02.329663 kubelet[1783]: E0513 00:03:02.329595 1783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:03:02.331415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:03:02.331534 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:03:02.331929 systemd[1]: kubelet.service: Consumed 81ms CPU time, 97.7M memory peak. May 13 00:03:08.159660 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:03:08.160942 systemd[1]: Started sshd@0-139.178.70.105:22-147.75.109.163:37382.service - OpenSSH per-connection server daemon (147.75.109.163:37382). May 13 00:03:08.211564 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 37382 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:08.212497 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:08.215842 systemd-logind[1534]: New session 3 of user core. May 13 00:03:08.223514 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:03:08.275516 systemd[1]: Started sshd@1-139.178.70.105:22-147.75.109.163:37396.service - OpenSSH per-connection server daemon (147.75.109.163:37396). May 13 00:03:08.320261 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 37396 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:08.321094 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:08.324818 systemd-logind[1534]: New session 4 of user core. May 13 00:03:08.335495 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:03:08.386443 sshd[1797]: Connection closed by 147.75.109.163 port 37396 May 13 00:03:08.385815 sshd-session[1795]: pam_unix(sshd:session): session closed for user core May 13 00:03:08.395692 systemd[1]: sshd@1-139.178.70.105:22-147.75.109.163:37396.service: Deactivated successfully. 
May 13 00:03:08.396742 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:03:08.397198 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. May 13 00:03:08.398480 systemd[1]: Started sshd@2-139.178.70.105:22-147.75.109.163:37402.service - OpenSSH per-connection server daemon (147.75.109.163:37402). May 13 00:03:08.399516 systemd-logind[1534]: Removed session 4. May 13 00:03:08.442858 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 37402 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:08.443941 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:08.447729 systemd-logind[1534]: New session 5 of user core. May 13 00:03:08.455563 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:03:08.502635 sshd[1805]: Connection closed by 147.75.109.163 port 37402 May 13 00:03:08.503050 sshd-session[1802]: pam_unix(sshd:session): session closed for user core May 13 00:03:08.513132 systemd[1]: sshd@2-139.178.70.105:22-147.75.109.163:37402.service: Deactivated successfully. May 13 00:03:08.514509 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:03:08.516600 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. May 13 00:03:08.517286 systemd[1]: Started sshd@3-139.178.70.105:22-147.75.109.163:37414.service - OpenSSH per-connection server daemon (147.75.109.163:37414). May 13 00:03:08.518055 systemd-logind[1534]: Removed session 5. May 13 00:03:08.557565 sshd[1810]: Accepted publickey for core from 147.75.109.163 port 37414 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:08.558727 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:08.563577 systemd-logind[1534]: New session 6 of user core. May 13 00:03:08.568501 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:03:08.618524 sshd[1813]: Connection closed by 147.75.109.163 port 37414 May 13 00:03:08.619455 sshd-session[1810]: pam_unix(sshd:session): session closed for user core May 13 00:03:08.633134 systemd[1]: sshd@3-139.178.70.105:22-147.75.109.163:37414.service: Deactivated successfully. May 13 00:03:08.634245 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:03:08.635242 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. May 13 00:03:08.636318 systemd[1]: Started sshd@4-139.178.70.105:22-147.75.109.163:37426.service - OpenSSH per-connection server daemon (147.75.109.163:37426). May 13 00:03:08.637811 systemd-logind[1534]: Removed session 6. May 13 00:03:08.679089 sshd[1818]: Accepted publickey for core from 147.75.109.163 port 37426 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:08.680167 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:08.682988 systemd-logind[1534]: New session 7 of user core. May 13 00:03:08.690492 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 00:03:08.747966 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:03:08.748127 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:08.759792 sudo[1822]: pam_unix(sudo:session): session closed for user root May 13 00:03:08.761031 sshd[1821]: Connection closed by 147.75.109.163 port 37426 May 13 00:03:08.760948 sshd-session[1818]: pam_unix(sshd:session): session closed for user core May 13 00:03:08.770547 systemd[1]: sshd@4-139.178.70.105:22-147.75.109.163:37426.service: Deactivated successfully. May 13 00:03:08.771521 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:03:08.772438 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. May 13 00:03:08.773244 systemd[1]: Started sshd@5-139.178.70.105:22-147.75.109.163:37438.service - OpenSSH per-connection server daemon (147.75.109.163:37438). May 13 00:03:08.774676 systemd-logind[1534]: Removed session 7. May 13 00:03:08.817304 sshd[1827]: Accepted publickey for core from 147.75.109.163 port 37438 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:08.818137 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:08.821827 systemd-logind[1534]: New session 8 of user core. May 13 00:03:08.829420 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:03:08.878345 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:03:08.878551 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:08.881269 sudo[1832]: pam_unix(sudo:session): session closed for user root May 13 00:03:08.885135 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 00:03:08.885357 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:08.892613 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 00:03:08.923719 augenrules[1854]: No rules May 13 00:03:08.924074 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:03:08.924239 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 00:03:08.925278 sudo[1831]: pam_unix(sudo:session): session closed for user root May 13 00:03:08.926377 sshd[1830]: Connection closed by 147.75.109.163 port 37438 May 13 00:03:08.926308 sshd-session[1827]: pam_unix(sshd:session): session closed for user core May 13 00:03:08.940218 systemd[1]: sshd@5-139.178.70.105:22-147.75.109.163:37438.service: Deactivated successfully. May 13 00:03:08.941066 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:03:08.941816 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. May 13 00:03:08.942666 systemd[1]: Started sshd@6-139.178.70.105:22-147.75.109.163:37452.service - OpenSSH per-connection server daemon (147.75.109.163:37452). May 13 00:03:08.944484 systemd-logind[1534]: Removed session 8. May 13 00:03:08.986206 sshd[1862]: Accepted publickey for core from 147.75.109.163 port 37452 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:03:08.986946 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:08.989825 systemd-logind[1534]: New session 9 of user core. May 13 00:03:08.996408 systemd[1]: Started session-9.scope - Session 9 of User core. 
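augenrules reported "No rules" both here and earlier because the sudo commands above removed the shipped rule files from /etc/audit/rules.d before audit-rules.service was restarted. If rules were wanted again, the usual flow is to drop a .rules file back into that directory and reload; a sketch in which the file name and the watched path are purely illustrative:

    echo '-w /etc/kubernetes/ -p wa -k kube-config' | sudo tee /etc/audit/rules.d/90-kube.rules
    sudo augenrules --load    # regenerates /etc/audit/audit.rules and loads it into the kernel
    sudo auditctl -l          # confirm the loaded rule set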
May 13 00:03:09.045179 sudo[1866]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:03:09.045392 sudo[1866]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:09.401924 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 00:03:09.409636 (dockerd)[1883]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:03:09.799743 dockerd[1883]: time="2025-05-13T00:03:09.799568202Z" level=info msg="Starting up" May 13 00:03:09.801314 dockerd[1883]: time="2025-05-13T00:03:09.801257064Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 00:03:09.817702 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport157949349-merged.mount: Deactivated successfully. May 13 00:03:09.884110 dockerd[1883]: time="2025-05-13T00:03:09.884072056Z" level=info msg="Loading containers: start." May 13 00:03:10.091523 kernel: Initializing XFRM netlink socket May 13 00:03:10.092495 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection. May 13 00:03:10.136148 systemd-networkd[1461]: docker0: Link UP May 13 00:03:10.174110 dockerd[1883]: time="2025-05-13T00:03:10.174083116Z" level=info msg="Loading containers: done." May 13 00:03:10.184985 dockerd[1883]: time="2025-05-13T00:03:10.184951868Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:03:10.185096 dockerd[1883]: time="2025-05-13T00:03:10.185016684Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 00:03:10.185118 dockerd[1883]: time="2025-05-13T00:03:10.185096204Z" level=info msg="Daemon has completed initialization" May 13 00:03:10.201585 dockerd[1883]: time="2025-05-13T00:03:10.201548441Z" level=info msg="API listen on /run/docker.sock" May 13 00:03:10.201797 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:04:31.306335 systemd-resolved[1464]: Clock change detected. Flushing caches. May 13 00:04:31.306652 systemd-timesyncd[1483]: Contacted time server 172.233.153.85:123 (2.flatcar.pool.ntp.org). May 13 00:04:31.306686 systemd-timesyncd[1483]: Initial clock synchronization to Tue 2025-05-13 00:04:31.306305 UTC. May 13 00:04:31.906692 containerd[1562]: time="2025-05-13T00:04:31.906617285Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 00:04:32.459763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050239417.mount: Deactivated successfully. May 13 00:04:33.467164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 00:04:33.468899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:04:33.552234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
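dockerd finished initialization above using the overlay2 storage driver, noting that native diff is disabled because the kernel exposes CONFIG_OVERLAY_FS_REDIRECT_DIR; per the message itself this only affects diff performance when building images. The driver in use could be confirmed from the CLI; a sketch:

    docker info --format '{{.Driver}}'                # expected: overlay2
    docker info --format '{{json .DriverStatus}}'     # backing filesystem, native-diff flag, etc.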
May 13 00:04:33.557385 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:04:33.612905 kubelet[2142]: E0513 00:04:33.612844 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:04:33.614161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:04:33.614263 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:04:33.614558 systemd[1]: kubelet.service: Consumed 88ms CPU time, 97.8M memory peak. May 13 00:04:33.863684 containerd[1562]: time="2025-05-13T00:04:33.863349126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:33.875679 containerd[1562]: time="2025-05-13T00:04:33.875631025Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 13 00:04:33.884871 containerd[1562]: time="2025-05-13T00:04:33.884851447Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:33.897394 containerd[1562]: time="2025-05-13T00:04:33.897355055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:33.898063 containerd[1562]: time="2025-05-13T00:04:33.897958502Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.991313547s" May 13 00:04:33.898063 containerd[1562]: time="2025-05-13T00:04:33.897982323Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 13 00:04:33.904336 containerd[1562]: time="2025-05-13T00:04:33.904317076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 00:04:36.563226 containerd[1562]: time="2025-05-13T00:04:36.562634344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:36.570154 containerd[1562]: time="2025-05-13T00:04:36.570128479Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 13 00:04:36.578231 containerd[1562]: time="2025-05-13T00:04:36.578179043Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:36.588061 containerd[1562]: time="2025-05-13T00:04:36.588029012Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:36.589039 containerd[1562]: time="2025-05-13T00:04:36.588786417Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.684447984s" May 13 00:04:36.589039 containerd[1562]: time="2025-05-13T00:04:36.588808153Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 13 00:04:36.589113 containerd[1562]: time="2025-05-13T00:04:36.589099565Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 00:04:38.139838 containerd[1562]: time="2025-05-13T00:04:38.139800821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:38.144298 containerd[1562]: time="2025-05-13T00:04:38.144266274Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 13 00:04:38.150983 containerd[1562]: time="2025-05-13T00:04:38.150948972Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:38.156284 containerd[1562]: time="2025-05-13T00:04:38.156260239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:38.157123 containerd[1562]: time="2025-05-13T00:04:38.156960627Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.567844126s" May 13 00:04:38.157123 containerd[1562]: time="2025-05-13T00:04:38.156985445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 13 00:04:38.157666 containerd[1562]: time="2025-05-13T00:04:38.157418418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 00:04:39.215914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484063117.mount: Deactivated successfully. 
May 13 00:04:40.172609 containerd[1562]: time="2025-05-13T00:04:40.172554578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:40.184275 containerd[1562]: time="2025-05-13T00:04:40.184141993Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 13 00:04:40.192176 containerd[1562]: time="2025-05-13T00:04:40.192129764Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:40.194251 containerd[1562]: time="2025-05-13T00:04:40.194218282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:40.194620 containerd[1562]: time="2025-05-13T00:04:40.194415998Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.036978677s" May 13 00:04:40.194620 containerd[1562]: time="2025-05-13T00:04:40.194434352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 00:04:40.194875 containerd[1562]: time="2025-05-13T00:04:40.194858111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:04:40.755952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407626109.mount: Deactivated successfully. 
May 13 00:04:41.605852 containerd[1562]: time="2025-05-13T00:04:41.605767865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:41.618376 containerd[1562]: time="2025-05-13T00:04:41.618337964Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 00:04:41.628770 containerd[1562]: time="2025-05-13T00:04:41.628735296Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:41.637356 containerd[1562]: time="2025-05-13T00:04:41.637324792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:41.638373 containerd[1562]: time="2025-05-13T00:04:41.638347131Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.443441307s" May 13 00:04:41.638417 containerd[1562]: time="2025-05-13T00:04:41.638374695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:04:41.638742 containerd[1562]: time="2025-05-13T00:04:41.638721658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:04:42.789644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418046773.mount: Deactivated successfully. 
May 13 00:04:42.824216 containerd[1562]: time="2025-05-13T00:04:42.824090412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:04:42.830716 containerd[1562]: time="2025-05-13T00:04:42.830570490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 00:04:42.837645 containerd[1562]: time="2025-05-13T00:04:42.837594534Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:04:42.846335 containerd[1562]: time="2025-05-13T00:04:42.846304852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:04:42.846977 containerd[1562]: time="2025-05-13T00:04:42.846669157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.207924084s" May 13 00:04:42.846977 containerd[1562]: time="2025-05-13T00:04:42.846697138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 00:04:42.847039 containerd[1562]: time="2025-05-13T00:04:42.847020329Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 00:04:43.405938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258719946.mount: Deactivated successfully. May 13 00:04:43.717127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 00:04:43.718439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:04:44.226130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:04:44.233413 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:04:44.269632 kubelet[2241]: E0513 00:04:44.269601 2241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:04:44.270881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:04:44.270964 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:04:44.271162 systemd[1]: kubelet.service: Consumed 103ms CPU time, 96.1M memory peak. May 13 00:04:44.561037 update_engine[1535]: I20250513 00:04:44.560941 1535 update_attempter.cc:509] Updating boot flags... 
May 13 00:04:44.636945 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2273) May 13 00:04:44.809201 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2275) May 13 00:04:48.843939 containerd[1562]: time="2025-05-13T00:04:48.843911337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:48.846612 containerd[1562]: time="2025-05-13T00:04:48.845152608Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 13 00:04:48.846612 containerd[1562]: time="2025-05-13T00:04:48.845650980Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:48.847217 containerd[1562]: time="2025-05-13T00:04:48.846896589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:48.847547 containerd[1562]: time="2025-05-13T00:04:48.847525231Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 6.000488692s" May 13 00:04:48.847579 containerd[1562]: time="2025-05-13T00:04:48.847550407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 00:04:51.258967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:04:51.259273 systemd[1]: kubelet.service: Consumed 103ms CPU time, 96.1M memory peak. May 13 00:04:51.260827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:04:51.277724 systemd[1]: Reload requested from client PID 2326 ('systemctl') (unit session-9.scope)... May 13 00:04:51.277735 systemd[1]: Reloading... May 13 00:04:51.351211 zram_generator::config[2374]: No configuration found. May 13 00:04:51.399732 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:04:51.417335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:04:51.481719 systemd[1]: Reloading finished in 203 ms. May 13 00:04:51.506758 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:04:51.506810 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 00:04:51.506980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:04:51.508183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:04:51.917894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 00:04:51.924457 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:04:51.982533 kubelet[2438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:04:51.982533 kubelet[2438]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:04:51.982533 kubelet[2438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:04:51.982533 kubelet[2438]: I0513 00:04:51.981766 2438 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:04:52.172983 kubelet[2438]: I0513 00:04:52.172898 2438 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:04:52.172983 kubelet[2438]: I0513 00:04:52.172929 2438 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:04:52.175117 kubelet[2438]: I0513 00:04:52.174767 2438 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:04:52.198920 kubelet[2438]: I0513 00:04:52.198556 2438 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:04:52.199269 kubelet[2438]: E0513 00:04:52.199248 2438 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 13 00:04:52.208166 kubelet[2438]: I0513 00:04:52.208153 2438 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 00:04:52.211067 kubelet[2438]: I0513 00:04:52.210996 2438 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:04:52.211067 kubelet[2438]: I0513 00:04:52.211059 2438 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:04:52.211262 kubelet[2438]: I0513 00:04:52.211120 2438 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:04:52.211262 kubelet[2438]: I0513 00:04:52.211137 2438 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:04:52.211385 kubelet[2438]: I0513 00:04:52.211272 2438 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:04:52.211385 kubelet[2438]: I0513 00:04:52.211279 2438 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:04:52.211385 kubelet[2438]: I0513 00:04:52.211343 2438 state_mem.go:36] "Initialized new in-memory state store" May 13 00:04:52.213066 kubelet[2438]: I0513 00:04:52.213048 2438 kubelet.go:408] "Attempting to sync node with API server" May 13 00:04:52.213066 kubelet[2438]: I0513 00:04:52.213062 2438 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:04:52.214218 kubelet[2438]: I0513 00:04:52.214104 2438 kubelet.go:314] "Adding apiserver pod source" May 13 00:04:52.214218 kubelet[2438]: I0513 00:04:52.214118 2438 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:04:52.218637 kubelet[2438]: W0513 00:04:52.218251 2438 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 13 00:04:52.218637 kubelet[2438]: E0513 00:04:52.218294 2438 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 13 00:04:52.219824 kubelet[2438]: W0513 00:04:52.219623 2438 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 13 00:04:52.219824 kubelet[2438]: E0513 00:04:52.219657 2438 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 13 00:04:52.219824 kubelet[2438]: I0513 00:04:52.219717 2438 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 00:04:52.221489 kubelet[2438]: I0513 00:04:52.221403 2438 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:04:52.221489 kubelet[2438]: W0513 00:04:52.221444 2438 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:04:52.222891 kubelet[2438]: I0513 00:04:52.222878 2438 server.go:1269] "Started kubelet" May 13 00:04:52.231022 kubelet[2438]: I0513 00:04:52.230984 2438 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:04:52.232133 kubelet[2438]: E0513 00:04:52.230109 2438 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eed5db6e9de2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:04:52.22285265 +0000 UTC m=+0.295545315,LastTimestamp:2025-05-13 00:04:52.22285265 +0000 UTC m=+0.295545315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:04:52.236102 kubelet[2438]: I0513 00:04:52.236049 2438 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:04:52.239241 kubelet[2438]: I0513 00:04:52.238708 2438 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:04:52.239241 kubelet[2438]: E0513 00:04:52.238867 2438 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:04:52.240142 kubelet[2438]: I0513 00:04:52.240114 2438 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:04:52.240390 kubelet[2438]: I0513 00:04:52.240380 2438 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:04:52.241222 kubelet[2438]: I0513 00:04:52.241115 2438 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:04:52.241222 kubelet[2438]: I0513 
00:04:52.241209 2438 reconciler.go:26] "Reconciler: start to sync state" May 13 00:04:52.241331 kubelet[2438]: W0513 00:04:52.241285 2438 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 13 00:04:52.241370 kubelet[2438]: E0513 00:04:52.241345 2438 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 13 00:04:52.242390 kubelet[2438]: I0513 00:04:52.241621 2438 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:04:52.242765 kubelet[2438]: I0513 00:04:52.242756 2438 server.go:460] "Adding debug handlers to kubelet server" May 13 00:04:52.243470 kubelet[2438]: I0513 00:04:52.243460 2438 factory.go:221] Registration of the systemd container factory successfully May 13 00:04:52.243559 kubelet[2438]: E0513 00:04:52.243527 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" May 13 00:04:52.243559 kubelet[2438]: I0513 00:04:52.243548 2438 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:04:52.245399 kubelet[2438]: E0513 00:04:52.245386 2438 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:04:52.245972 kubelet[2438]: I0513 00:04:52.245962 2438 factory.go:221] Registration of the containerd container factory successfully May 13 00:04:52.255558 kubelet[2438]: I0513 00:04:52.255522 2438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:04:52.256666 kubelet[2438]: I0513 00:04:52.256451 2438 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:04:52.256666 kubelet[2438]: I0513 00:04:52.256467 2438 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:04:52.256666 kubelet[2438]: I0513 00:04:52.256484 2438 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:04:52.256666 kubelet[2438]: E0513 00:04:52.256510 2438 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:04:52.263075 kubelet[2438]: W0513 00:04:52.263037 2438 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 13 00:04:52.263220 kubelet[2438]: E0513 00:04:52.263208 2438 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 13 00:04:52.267365 kubelet[2438]: I0513 00:04:52.267357 2438 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:04:52.267438 kubelet[2438]: I0513 00:04:52.267432 2438 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:04:52.267474 kubelet[2438]: I0513 00:04:52.267470 2438 state_mem.go:36] "Initialized new in-memory state store" May 13 00:04:52.268365 kubelet[2438]: I0513 00:04:52.268357 2438 policy_none.go:49] "None policy: Start" May 13 00:04:52.268713 kubelet[2438]: I0513 00:04:52.268700 2438 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:04:52.268803 kubelet[2438]: I0513 00:04:52.268724 2438 state_mem.go:35] "Initializing new in-memory state store" May 13 00:04:52.276838 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:04:52.288742 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:04:52.290740 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:04:52.297638 kubelet[2438]: I0513 00:04:52.297624 2438 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:04:52.297737 kubelet[2438]: I0513 00:04:52.297728 2438 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:04:52.297766 kubelet[2438]: I0513 00:04:52.297735 2438 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:04:52.298280 kubelet[2438]: I0513 00:04:52.297964 2438 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:04:52.299174 kubelet[2438]: E0513 00:04:52.299155 2438 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:04:52.366242 systemd[1]: Created slice kubepods-burstable-podb9b1aeb23f44246e64717b06cb96ae36.slice - libcontainer container kubepods-burstable-podb9b1aeb23f44246e64717b06cb96ae36.slice. May 13 00:04:52.383458 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
May 13 00:04:52.386221 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 13 00:04:52.398972 kubelet[2438]: I0513 00:04:52.398953 2438 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:04:52.399172 kubelet[2438]: E0513 00:04:52.399160 2438 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" May 13 00:04:52.441731 kubelet[2438]: I0513 00:04:52.441591 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9b1aeb23f44246e64717b06cb96ae36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9b1aeb23f44246e64717b06cb96ae36\") " pod="kube-system/kube-apiserver-localhost" May 13 00:04:52.441731 kubelet[2438]: I0513 00:04:52.441608 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9b1aeb23f44246e64717b06cb96ae36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9b1aeb23f44246e64717b06cb96ae36\") " pod="kube-system/kube-apiserver-localhost" May 13 00:04:52.441731 kubelet[2438]: I0513 00:04:52.441618 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:52.441731 kubelet[2438]: I0513 00:04:52.441627 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:52.441731 kubelet[2438]: I0513 00:04:52.441637 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:52.441840 kubelet[2438]: I0513 00:04:52.441656 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 00:04:52.441840 kubelet[2438]: I0513 00:04:52.441665 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9b1aeb23f44246e64717b06cb96ae36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9b1aeb23f44246e64717b06cb96ae36\") " pod="kube-system/kube-apiserver-localhost" May 13 00:04:52.441840 kubelet[2438]: I0513 00:04:52.441673 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:52.441840 kubelet[2438]: I0513 00:04:52.441682 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:52.443783 kubelet[2438]: E0513 00:04:52.443762 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" May 13 00:04:52.600212 kubelet[2438]: I0513 00:04:52.600093 2438 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:04:52.600293 kubelet[2438]: E0513 00:04:52.600282 2438 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" May 13 00:04:52.682372 containerd[1562]: time="2025-05-13T00:04:52.682346235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9b1aeb23f44246e64717b06cb96ae36,Namespace:kube-system,Attempt:0,}" May 13 00:04:52.685877 containerd[1562]: time="2025-05-13T00:04:52.685662517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 00:04:52.692998 containerd[1562]: time="2025-05-13T00:04:52.692815102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 00:04:52.745207 containerd[1562]: time="2025-05-13T00:04:52.744776670Z" level=info msg="connecting to shim 826e36b24f2aac6460e79324ff5f744cc13642386916c6ef8c5fbc4e84fd81c9" address="unix:///run/containerd/s/6e888e697a57f239d6c474eaafa42be388fbf10834e3786678e0a1cf068c6133" namespace=k8s.io protocol=ttrpc version=3 May 13 00:04:52.750392 containerd[1562]: time="2025-05-13T00:04:52.750372854Z" level=info msg="connecting to shim 4a47be3be1fdd5311f218754c88d4fe0770da9170275359fabbe4b581d738501" address="unix:///run/containerd/s/5b5c8c0750b615a5f97aa51bacbe6052e4a39694a5646c1f8b0b2db1d090909a" namespace=k8s.io protocol=ttrpc version=3 May 13 00:04:52.750820 containerd[1562]: time="2025-05-13T00:04:52.750807228Z" level=info msg="connecting to shim ad449861eb3524be8272a05c5d501d18665754907c0a66bb6111207d46a20607" address="unix:///run/containerd/s/e4689319488e8153b581d24cea32b58715dc61ba09fb1b7c8efbdc037c299e01" namespace=k8s.io protocol=ttrpc version=3 May 13 00:04:52.824274 systemd[1]: Started cri-containerd-4a47be3be1fdd5311f218754c88d4fe0770da9170275359fabbe4b581d738501.scope - libcontainer container 4a47be3be1fdd5311f218754c88d4fe0770da9170275359fabbe4b581d738501. May 13 00:04:52.825133 systemd[1]: Started cri-containerd-826e36b24f2aac6460e79324ff5f744cc13642386916c6ef8c5fbc4e84fd81c9.scope - libcontainer container 826e36b24f2aac6460e79324ff5f744cc13642386916c6ef8c5fbc4e84fd81c9. 
May 13 00:04:52.825926 systemd[1]: Started cri-containerd-ad449861eb3524be8272a05c5d501d18665754907c0a66bb6111207d46a20607.scope - libcontainer container ad449861eb3524be8272a05c5d501d18665754907c0a66bb6111207d46a20607. May 13 00:04:52.851198 kubelet[2438]: E0513 00:04:52.850680 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" May 13 00:04:52.893334 containerd[1562]: time="2025-05-13T00:04:52.893309505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9b1aeb23f44246e64717b06cb96ae36,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a47be3be1fdd5311f218754c88d4fe0770da9170275359fabbe4b581d738501\"" May 13 00:04:52.895461 containerd[1562]: time="2025-05-13T00:04:52.895391552Z" level=info msg="CreateContainer within sandbox \"4a47be3be1fdd5311f218754c88d4fe0770da9170275359fabbe4b581d738501\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:04:53.001687 kubelet[2438]: I0513 00:04:53.001670 2438 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:04:53.005168 kubelet[2438]: E0513 00:04:53.001954 2438 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" May 13 00:04:53.012914 containerd[1562]: time="2025-05-13T00:04:53.012853759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"826e36b24f2aac6460e79324ff5f744cc13642386916c6ef8c5fbc4e84fd81c9\"" May 13 00:04:53.017262 containerd[1562]: time="2025-05-13T00:04:53.013851882Z" level=info msg="CreateContainer within sandbox \"826e36b24f2aac6460e79324ff5f744cc13642386916c6ef8c5fbc4e84fd81c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:04:53.026300 containerd[1562]: time="2025-05-13T00:04:53.026282401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad449861eb3524be8272a05c5d501d18665754907c0a66bb6111207d46a20607\"" May 13 00:04:53.031046 containerd[1562]: time="2025-05-13T00:04:53.027216181Z" level=info msg="CreateContainer within sandbox \"ad449861eb3524be8272a05c5d501d18665754907c0a66bb6111207d46a20607\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:04:53.038624 containerd[1562]: time="2025-05-13T00:04:53.038606599Z" level=info msg="Container 6f29e6b6bb088fd444c7d60c8bc819c571cfdd65c931ab5d1d95b89a712db40d: CDI devices from CRI Config.CDIDevices: []" May 13 00:04:53.042648 containerd[1562]: time="2025-05-13T00:04:53.042609545Z" level=info msg="Container 9333c074de5e7bd59109097974abc31ee385dcb4ea405fe0a1e1a9161afc0678: CDI devices from CRI Config.CDIDevices: []" May 13 00:04:53.043180 containerd[1562]: time="2025-05-13T00:04:53.043168736Z" level=info msg="Container 14e637b8aa247e2273c3911df0adae3e54c50a54a762c4abd65f98c5e5a7bfad: CDI devices from CRI Config.CDIDevices: []" May 13 00:04:53.045352 containerd[1562]: time="2025-05-13T00:04:53.045335054Z" level=info msg="CreateContainer within sandbox 
\"ad449861eb3524be8272a05c5d501d18665754907c0a66bb6111207d46a20607\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9333c074de5e7bd59109097974abc31ee385dcb4ea405fe0a1e1a9161afc0678\"" May 13 00:04:53.046076 containerd[1562]: time="2025-05-13T00:04:53.046050090Z" level=info msg="StartContainer for \"9333c074de5e7bd59109097974abc31ee385dcb4ea405fe0a1e1a9161afc0678\"" May 13 00:04:53.046587 containerd[1562]: time="2025-05-13T00:04:53.046506986Z" level=info msg="CreateContainer within sandbox \"4a47be3be1fdd5311f218754c88d4fe0770da9170275359fabbe4b581d738501\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f29e6b6bb088fd444c7d60c8bc819c571cfdd65c931ab5d1d95b89a712db40d\"" May 13 00:04:53.046824 containerd[1562]: time="2025-05-13T00:04:53.046812903Z" level=info msg="connecting to shim 9333c074de5e7bd59109097974abc31ee385dcb4ea405fe0a1e1a9161afc0678" address="unix:///run/containerd/s/e4689319488e8153b581d24cea32b58715dc61ba09fb1b7c8efbdc037c299e01" protocol=ttrpc version=3 May 13 00:04:53.047326 containerd[1562]: time="2025-05-13T00:04:53.047315484Z" level=info msg="StartContainer for \"6f29e6b6bb088fd444c7d60c8bc819c571cfdd65c931ab5d1d95b89a712db40d\"" May 13 00:04:53.047825 containerd[1562]: time="2025-05-13T00:04:53.047790331Z" level=info msg="CreateContainer within sandbox \"826e36b24f2aac6460e79324ff5f744cc13642386916c6ef8c5fbc4e84fd81c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"14e637b8aa247e2273c3911df0adae3e54c50a54a762c4abd65f98c5e5a7bfad\"" May 13 00:04:53.048037 containerd[1562]: time="2025-05-13T00:04:53.048023614Z" level=info msg="StartContainer for \"14e637b8aa247e2273c3911df0adae3e54c50a54a762c4abd65f98c5e5a7bfad\"" May 13 00:04:53.048532 containerd[1562]: time="2025-05-13T00:04:53.048520390Z" level=info msg="connecting to shim 6f29e6b6bb088fd444c7d60c8bc819c571cfdd65c931ab5d1d95b89a712db40d" address="unix:///run/containerd/s/5b5c8c0750b615a5f97aa51bacbe6052e4a39694a5646c1f8b0b2db1d090909a" protocol=ttrpc version=3 May 13 00:04:53.048624 containerd[1562]: time="2025-05-13T00:04:53.048547226Z" level=info msg="connecting to shim 14e637b8aa247e2273c3911df0adae3e54c50a54a762c4abd65f98c5e5a7bfad" address="unix:///run/containerd/s/6e888e697a57f239d6c474eaafa42be388fbf10834e3786678e0a1cf068c6133" protocol=ttrpc version=3 May 13 00:04:53.061275 systemd[1]: Started cri-containerd-9333c074de5e7bd59109097974abc31ee385dcb4ea405fe0a1e1a9161afc0678.scope - libcontainer container 9333c074de5e7bd59109097974abc31ee385dcb4ea405fe0a1e1a9161afc0678. May 13 00:04:53.064343 systemd[1]: Started cri-containerd-14e637b8aa247e2273c3911df0adae3e54c50a54a762c4abd65f98c5e5a7bfad.scope - libcontainer container 14e637b8aa247e2273c3911df0adae3e54c50a54a762c4abd65f98c5e5a7bfad. May 13 00:04:53.067439 systemd[1]: Started cri-containerd-6f29e6b6bb088fd444c7d60c8bc819c571cfdd65c931ab5d1d95b89a712db40d.scope - libcontainer container 6f29e6b6bb088fd444c7d60c8bc819c571cfdd65c931ab5d1d95b89a712db40d. 
May 13 00:04:53.116582 containerd[1562]: time="2025-05-13T00:04:53.116536635Z" level=info msg="StartContainer for \"9333c074de5e7bd59109097974abc31ee385dcb4ea405fe0a1e1a9161afc0678\" returns successfully" May 13 00:04:53.118592 containerd[1562]: time="2025-05-13T00:04:53.118552782Z" level=info msg="StartContainer for \"6f29e6b6bb088fd444c7d60c8bc819c571cfdd65c931ab5d1d95b89a712db40d\" returns successfully" May 13 00:04:53.125885 containerd[1562]: time="2025-05-13T00:04:53.125847433Z" level=info msg="StartContainer for \"14e637b8aa247e2273c3911df0adae3e54c50a54a762c4abd65f98c5e5a7bfad\" returns successfully" May 13 00:04:53.420658 kubelet[2438]: W0513 00:04:53.420577 2438 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 13 00:04:53.420658 kubelet[2438]: E0513 00:04:53.420620 2438 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 13 00:04:53.651930 kubelet[2438]: E0513 00:04:53.651899 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" May 13 00:04:53.675476 kubelet[2438]: W0513 00:04:53.675395 2438 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 13 00:04:53.675476 kubelet[2438]: E0513 00:04:53.675437 2438 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 13 00:04:53.697918 kubelet[2438]: W0513 00:04:53.697882 2438 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 13 00:04:53.697999 kubelet[2438]: E0513 00:04:53.697932 2438 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 13 00:04:53.803825 kubelet[2438]: I0513 00:04:53.803805 2438 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:04:54.748772 kubelet[2438]: I0513 00:04:54.748742 2438 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 00:04:55.220580 kubelet[2438]: I0513 00:04:55.220427 2438 apiserver.go:52] "Watching apiserver" May 13 00:04:55.242256 kubelet[2438]: I0513 00:04:55.242239 2438 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:04:56.438754 systemd[1]: Reload requested from client PID 2701 ('systemctl') (unit session-9.scope)... May 13 00:04:56.438764 systemd[1]: Reloading... May 13 00:04:56.496203 zram_generator::config[2746]: No configuration found. May 13 00:04:56.559578 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:04:56.577492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:04:56.651114 systemd[1]: Reloading finished in 212 ms. May 13 00:04:56.669675 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:04:56.683029 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:04:56.683284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:04:56.683372 systemd[1]: kubelet.service: Consumed 457ms CPU time, 116.8M memory peak. May 13 00:04:56.687340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:04:56.864742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:04:56.873383 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:04:56.933633 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:04:56.933633 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:04:56.933633 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:04:56.933889 kubelet[2813]: I0513 00:04:56.933641 2813 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:04:56.937238 kubelet[2813]: I0513 00:04:56.937224 2813 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:04:56.937238 kubelet[2813]: I0513 00:04:56.937235 2813 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:04:56.976388 kubelet[2813]: I0513 00:04:56.976225 2813 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:04:56.986258 kubelet[2813]: I0513 00:04:56.986231 2813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 13 00:04:56.996597 kubelet[2813]: I0513 00:04:56.996559 2813 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:04:56.999137 kubelet[2813]: I0513 00:04:56.999121 2813 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 00:04:57.001289 kubelet[2813]: I0513 00:04:57.001276 2813 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:04:57.022848 kubelet[2813]: I0513 00:04:57.022823 2813 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:04:57.022963 kubelet[2813]: I0513 00:04:57.022929 2813 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:04:57.023102 kubelet[2813]: I0513 00:04:57.022959 2813 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:04:57.023102 kubelet[2813]: I0513 00:04:57.023101 2813 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:04:57.023217 kubelet[2813]: I0513 00:04:57.023110 2813 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:04:57.023217 kubelet[2813]: I0513 00:04:57.023138 2813 state_mem.go:36] "Initialized new in-memory state store" May 13 00:04:57.030357 kubelet[2813]: I0513 00:04:57.030285 2813 kubelet.go:408] "Attempting to sync node with API server" May 13 00:04:57.030357 kubelet[2813]: I0513 00:04:57.030304 2813 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:04:57.030357 kubelet[2813]: I0513 00:04:57.030327 2813 kubelet.go:314] "Adding apiserver pod source" May 13 00:04:57.030947 kubelet[2813]: I0513 00:04:57.030338 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:04:57.041687 kubelet[2813]: I0513 00:04:57.041671 2813 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 00:04:57.043845 kubelet[2813]: I0513 00:04:57.043834 2813 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:04:57.044280 kubelet[2813]: I0513 00:04:57.044271 2813 server.go:1269] "Started kubelet" May 13 00:04:57.048312 kubelet[2813]: I0513 00:04:57.048272 2813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:04:57.048666 kubelet[2813]: I0513 00:04:57.048651 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:04:57.049247 kubelet[2813]: I0513 00:04:57.049238 2813 server.go:460] "Adding debug handlers to kubelet server" May 13 00:04:57.050182 kubelet[2813]: I0513 00:04:57.049707 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:04:57.050182 kubelet[2813]: I0513 00:04:57.049835 2813 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:04:57.050182 kubelet[2813]: I0513 00:04:57.049970 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:04:57.050917 kubelet[2813]: I0513 00:04:57.050505 2813 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:04:57.052928 kubelet[2813]: I0513 00:04:57.052910 2813 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:04:57.053030 kubelet[2813]: I0513 00:04:57.053019 2813 reconciler.go:26] "Reconciler: start to sync state" May 13 00:04:57.054399 kubelet[2813]: I0513 00:04:57.054313 2813 factory.go:221] Registration of the systemd container factory successfully May 13 00:04:57.054524 kubelet[2813]: I0513 00:04:57.054449 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:04:57.058847 kubelet[2813]: I0513 00:04:57.058830 2813 factory.go:221] Registration of the containerd container factory successfully May 13 00:04:57.059119 kubelet[2813]: I0513 00:04:57.059104 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:04:57.060073 kubelet[2813]: I0513 00:04:57.060064 2813 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:04:57.060131 kubelet[2813]: I0513 00:04:57.060126 2813 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:04:57.060178 kubelet[2813]: I0513 00:04:57.060173 2813 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:04:57.060391 kubelet[2813]: E0513 00:04:57.060249 2813 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:04:57.095245 kubelet[2813]: I0513 00:04:57.095231 2813 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:04:57.095373 kubelet[2813]: I0513 00:04:57.095364 2813 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:04:57.095507 kubelet[2813]: I0513 00:04:57.095416 2813 state_mem.go:36] "Initialized new in-memory state store" May 13 00:04:57.095579 kubelet[2813]: I0513 00:04:57.095570 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:04:57.095631 kubelet[2813]: I0513 00:04:57.095609 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:04:57.095664 kubelet[2813]: I0513 00:04:57.095660 2813 policy_none.go:49] "None policy: Start" May 13 00:04:57.096031 kubelet[2813]: I0513 00:04:57.096019 2813 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:04:57.096064 kubelet[2813]: I0513 00:04:57.096034 2813 state_mem.go:35] "Initializing new in-memory state store" May 13 00:04:57.096219 kubelet[2813]: I0513 00:04:57.096164 2813 state_mem.go:75] "Updated machine memory state" May 13 00:04:57.098679 kubelet[2813]: I0513 00:04:57.098669 2813 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:04:57.099099 kubelet[2813]: I0513 00:04:57.099033 2813 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:04:57.099099 kubelet[2813]: I0513 00:04:57.099043 2813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:04:57.100251 kubelet[2813]: I0513 00:04:57.099719 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:04:57.202769 kubelet[2813]: I0513 00:04:57.202707 2813 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:04:57.206832 kubelet[2813]: I0513 00:04:57.206809 2813 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 00:04:57.207064 kubelet[2813]: I0513 00:04:57.206870 2813 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 00:04:57.254760 kubelet[2813]: I0513 00:04:57.254726 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:57.254760 kubelet[2813]: I0513 00:04:57.254764 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:57.355530 kubelet[2813]: I0513 00:04:57.355466 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:57.355646 kubelet[2813]: I0513 00:04:57.355556 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9b1aeb23f44246e64717b06cb96ae36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9b1aeb23f44246e64717b06cb96ae36\") " pod="kube-system/kube-apiserver-localhost" May 13 00:04:57.355646 kubelet[2813]: I0513 00:04:57.355573 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9b1aeb23f44246e64717b06cb96ae36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9b1aeb23f44246e64717b06cb96ae36\") " pod="kube-system/kube-apiserver-localhost" May 13 00:04:57.355646 kubelet[2813]: I0513 00:04:57.355617 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:57.355646 kubelet[2813]: I0513 00:04:57.355640 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 00:04:57.355767 kubelet[2813]: I0513 00:04:57.355675 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9b1aeb23f44246e64717b06cb96ae36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9b1aeb23f44246e64717b06cb96ae36\") " pod="kube-system/kube-apiserver-localhost" May 13 00:04:57.355767 kubelet[2813]: I0513 00:04:57.355713 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:04:57.459626 sudo[2844]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 00:04:57.459827 sudo[2844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 00:04:57.847738 sudo[2844]: pam_unix(sudo:session): session closed for user root May 13 00:04:58.031522 kubelet[2813]: I0513 00:04:58.031007 2813 apiserver.go:52] "Watching apiserver" May 13 00:04:58.053203 kubelet[2813]: I0513 00:04:58.053154 2813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:04:58.131180 kubelet[2813]: I0513 00:04:58.130831 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.130818885 podStartE2EDuration="1.130818885s" podCreationTimestamp="2025-05-13 00:04:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:04:58.110826366 +0000 UTC m=+1.227341677" watchObservedRunningTime="2025-05-13 00:04:58.130818885 +0000 UTC m=+1.247334194" May 13 00:04:58.152937 kubelet[2813]: I0513 00:04:58.152826 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.152814702 podStartE2EDuration="1.152814702s" podCreationTimestamp="2025-05-13 00:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:04:58.133255571 +0000 UTC m=+1.249770883" watchObservedRunningTime="2025-05-13 00:04:58.152814702 +0000 UTC m=+1.269330015" May 13 00:04:58.162846 kubelet[2813]: I0513 00:04:58.162808 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.162796365 podStartE2EDuration="1.162796365s" podCreationTimestamp="2025-05-13 00:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:04:58.153320602 +0000 UTC m=+1.269835913" watchObservedRunningTime="2025-05-13 00:04:58.162796365 +0000 UTC m=+1.279311676" May 13 00:04:59.160671 sudo[1866]: pam_unix(sudo:session): session closed for user root May 13 00:04:59.161396 sshd[1865]: Connection closed by 147.75.109.163 port 37452 May 13 00:04:59.161922 sshd-session[1862]: pam_unix(sshd:session): session closed for user core May 13 00:04:59.164109 systemd[1]: sshd@6-139.178.70.105:22-147.75.109.163:37452.service: Deactivated successfully. May 13 00:04:59.165509 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:04:59.165692 systemd[1]: session-9.scope: Consumed 3.403s CPU time, 208.7M memory peak. May 13 00:04:59.166797 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. May 13 00:04:59.167520 systemd-logind[1534]: Removed session 9. May 13 00:05:02.607731 kubelet[2813]: I0513 00:05:02.607708 2813 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:05:02.608122 containerd[1562]: time="2025-05-13T00:05:02.607896043Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:05:02.608700 kubelet[2813]: I0513 00:05:02.608311 2813 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:05:03.716605 systemd[1]: Created slice kubepods-besteffort-podf6e0cc24_be47_41ce_8626_094c2ee3597d.slice - libcontainer container kubepods-besteffort-podf6e0cc24_be47_41ce_8626_094c2ee3597d.slice. 
May 13 00:05:03.750255 kubelet[2813]: W0513 00:05:03.749383 2813 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 13 00:05:03.751261 kubelet[2813]: E0513 00:05:03.751220 2813 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 13 00:05:03.751650 systemd[1]: Created slice kubepods-besteffort-podb9876e21_82fc_49eb_a253_a6b305af38c0.slice - libcontainer container kubepods-besteffort-podb9876e21_82fc_49eb_a253_a6b305af38c0.slice. May 13 00:05:03.767474 systemd[1]: Created slice kubepods-burstable-pod251c62c6_ea0d_41f0_b167_04c80856640e.slice - libcontainer container kubepods-burstable-pod251c62c6_ea0d_41f0_b167_04c80856640e.slice. May 13 00:05:03.799539 kubelet[2813]: I0513 00:05:03.799503 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6e0cc24-be47-41ce-8626-094c2ee3597d-cilium-config-path\") pod \"cilium-operator-5d85765b45-l6d6t\" (UID: \"f6e0cc24-be47-41ce-8626-094c2ee3597d\") " pod="kube-system/cilium-operator-5d85765b45-l6d6t" May 13 00:05:03.799539 kubelet[2813]: I0513 00:05:03.799530 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9876e21-82fc-49eb-a253-a6b305af38c0-kube-proxy\") pod \"kube-proxy-dvftd\" (UID: \"b9876e21-82fc-49eb-a253-a6b305af38c0\") " pod="kube-system/kube-proxy-dvftd" May 13 00:05:03.799539 kubelet[2813]: I0513 00:05:03.799543 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-host-proc-sys-kernel\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799670 kubelet[2813]: I0513 00:05:03.799554 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-host-proc-sys-net\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799670 kubelet[2813]: I0513 00:05:03.799564 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9876e21-82fc-49eb-a253-a6b305af38c0-xtables-lock\") pod \"kube-proxy-dvftd\" (UID: \"b9876e21-82fc-49eb-a253-a6b305af38c0\") " pod="kube-system/kube-proxy-dvftd" May 13 00:05:03.799670 kubelet[2813]: I0513 00:05:03.799573 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp6r7\" (UniqueName: \"kubernetes.io/projected/b9876e21-82fc-49eb-a253-a6b305af38c0-kube-api-access-jp6r7\") pod \"kube-proxy-dvftd\" (UID: \"b9876e21-82fc-49eb-a253-a6b305af38c0\") " 
pod="kube-system/kube-proxy-dvftd" May 13 00:05:03.799670 kubelet[2813]: I0513 00:05:03.799582 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-etc-cni-netd\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799670 kubelet[2813]: I0513 00:05:03.799592 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-run\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799670 kubelet[2813]: I0513 00:05:03.799606 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-bpf-maps\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799799 kubelet[2813]: I0513 00:05:03.799616 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-cgroup\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799799 kubelet[2813]: I0513 00:05:03.799625 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-lib-modules\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799799 kubelet[2813]: I0513 00:05:03.799633 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/251c62c6-ea0d-41f0-b167-04c80856640e-hubble-tls\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799799 kubelet[2813]: I0513 00:05:03.799653 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgngz\" (UniqueName: \"kubernetes.io/projected/f6e0cc24-be47-41ce-8626-094c2ee3597d-kube-api-access-hgngz\") pod \"cilium-operator-5d85765b45-l6d6t\" (UID: \"f6e0cc24-be47-41ce-8626-094c2ee3597d\") " pod="kube-system/cilium-operator-5d85765b45-l6d6t" May 13 00:05:03.799799 kubelet[2813]: I0513 00:05:03.799665 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-hostproc\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799889 kubelet[2813]: I0513 00:05:03.799677 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-xtables-lock\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799889 kubelet[2813]: I0513 00:05:03.799686 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-config-path\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799889 kubelet[2813]: I0513 00:05:03.799695 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9876e21-82fc-49eb-a253-a6b305af38c0-lib-modules\") pod \"kube-proxy-dvftd\" (UID: \"b9876e21-82fc-49eb-a253-a6b305af38c0\") " pod="kube-system/kube-proxy-dvftd" May 13 00:05:03.799889 kubelet[2813]: I0513 00:05:03.799705 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cni-path\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799889 kubelet[2813]: I0513 00:05:03.799721 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/251c62c6-ea0d-41f0-b167-04c80856640e-clustermesh-secrets\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:03.799889 kubelet[2813]: I0513 00:05:03.799730 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65smt\" (UniqueName: \"kubernetes.io/projected/251c62c6-ea0d-41f0-b167-04c80856640e-kube-api-access-65smt\") pod \"cilium-x5fsv\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " pod="kube-system/cilium-x5fsv" May 13 00:05:04.022271 containerd[1562]: time="2025-05-13T00:05:04.022105707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l6d6t,Uid:f6e0cc24-be47-41ce-8626-094c2ee3597d,Namespace:kube-system,Attempt:0,}" May 13 00:05:04.034824 containerd[1562]: time="2025-05-13T00:05:04.034730363Z" level=info msg="connecting to shim daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603" address="unix:///run/containerd/s/e2867c5029877d487ca20a664c35eafe897b8cd3bb963277d821813bf1937da1" namespace=k8s.io protocol=ttrpc version=3 May 13 00:05:04.054291 systemd[1]: Started cri-containerd-daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603.scope - libcontainer container daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603. 
May 13 00:05:04.070179 containerd[1562]: time="2025-05-13T00:05:04.069879188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5fsv,Uid:251c62c6-ea0d-41f0-b167-04c80856640e,Namespace:kube-system,Attempt:0,}" May 13 00:05:04.108654 containerd[1562]: time="2025-05-13T00:05:04.108629453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l6d6t,Uid:f6e0cc24-be47-41ce-8626-094c2ee3597d,Namespace:kube-system,Attempt:0,} returns sandbox id \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\"" May 13 00:05:04.109822 containerd[1562]: time="2025-05-13T00:05:04.109776173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:05:04.132251 containerd[1562]: time="2025-05-13T00:05:04.131969085Z" level=info msg="connecting to shim 858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae" address="unix:///run/containerd/s/7264f4228233d45f4c8e6e6999c1c9d8e7cb0a16e49343eacd73ee4cd834248c" namespace=k8s.io protocol=ttrpc version=3 May 13 00:05:04.158392 systemd[1]: Started cri-containerd-858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae.scope - libcontainer container 858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae. May 13 00:05:04.207477 containerd[1562]: time="2025-05-13T00:05:04.207425644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5fsv,Uid:251c62c6-ea0d-41f0-b167-04c80856640e,Namespace:kube-system,Attempt:0,} returns sandbox id \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\"" May 13 00:05:04.956654 containerd[1562]: time="2025-05-13T00:05:04.956529809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvftd,Uid:b9876e21-82fc-49eb-a253-a6b305af38c0,Namespace:kube-system,Attempt:0,}" May 13 00:05:04.968421 containerd[1562]: time="2025-05-13T00:05:04.968370108Z" level=info msg="connecting to shim 7bb5a0f49d0e8c458344e78ae2bf6bbe6e3f811fc6e02cb785305c4ff9836a19" address="unix:///run/containerd/s/67904976c691a67ddb0e1a4ae64a955424664bc5e632690832466b835bf67b4a" namespace=k8s.io protocol=ttrpc version=3 May 13 00:05:04.988418 systemd[1]: Started cri-containerd-7bb5a0f49d0e8c458344e78ae2bf6bbe6e3f811fc6e02cb785305c4ff9836a19.scope - libcontainer container 7bb5a0f49d0e8c458344e78ae2bf6bbe6e3f811fc6e02cb785305c4ff9836a19. May 13 00:05:05.010859 containerd[1562]: time="2025-05-13T00:05:05.010786260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvftd,Uid:b9876e21-82fc-49eb-a253-a6b305af38c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bb5a0f49d0e8c458344e78ae2bf6bbe6e3f811fc6e02cb785305c4ff9836a19\"" May 13 00:05:05.013642 containerd[1562]: time="2025-05-13T00:05:05.013545093Z" level=info msg="CreateContainer within sandbox \"7bb5a0f49d0e8c458344e78ae2bf6bbe6e3f811fc6e02cb785305c4ff9836a19\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:05:05.029766 containerd[1562]: time="2025-05-13T00:05:05.029340353Z" level=info msg="Container e8dab24a9b3221f847bf61059bd5268bb792f55722626d60f2fa1cf61d4dbb30: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:05.031800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340781402.mount: Deactivated successfully. 
May 13 00:05:05.035635 containerd[1562]: time="2025-05-13T00:05:05.035587237Z" level=info msg="CreateContainer within sandbox \"7bb5a0f49d0e8c458344e78ae2bf6bbe6e3f811fc6e02cb785305c4ff9836a19\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8dab24a9b3221f847bf61059bd5268bb792f55722626d60f2fa1cf61d4dbb30\"" May 13 00:05:05.037544 containerd[1562]: time="2025-05-13T00:05:05.036035184Z" level=info msg="StartContainer for \"e8dab24a9b3221f847bf61059bd5268bb792f55722626d60f2fa1cf61d4dbb30\"" May 13 00:05:05.037544 containerd[1562]: time="2025-05-13T00:05:05.037007911Z" level=info msg="connecting to shim e8dab24a9b3221f847bf61059bd5268bb792f55722626d60f2fa1cf61d4dbb30" address="unix:///run/containerd/s/67904976c691a67ddb0e1a4ae64a955424664bc5e632690832466b835bf67b4a" protocol=ttrpc version=3 May 13 00:05:05.059399 systemd[1]: Started cri-containerd-e8dab24a9b3221f847bf61059bd5268bb792f55722626d60f2fa1cf61d4dbb30.scope - libcontainer container e8dab24a9b3221f847bf61059bd5268bb792f55722626d60f2fa1cf61d4dbb30. May 13 00:05:05.097980 containerd[1562]: time="2025-05-13T00:05:05.097960405Z" level=info msg="StartContainer for \"e8dab24a9b3221f847bf61059bd5268bb792f55722626d60f2fa1cf61d4dbb30\" returns successfully" May 13 00:05:05.125857 kubelet[2813]: I0513 00:05:05.125823 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dvftd" podStartSLOduration=2.1258120209999998 podStartE2EDuration="2.125812021s" podCreationTimestamp="2025-05-13 00:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:05:05.125667585 +0000 UTC m=+8.242182898" watchObservedRunningTime="2025-05-13 00:05:05.125812021 +0000 UTC m=+8.242327334" May 13 00:05:06.789471 containerd[1562]: time="2025-05-13T00:05:06.789430811Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:05:06.790149 containerd[1562]: time="2025-05-13T00:05:06.789944037Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 13 00:05:06.790850 containerd[1562]: time="2025-05-13T00:05:06.790520726Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:05:06.791778 containerd[1562]: time="2025-05-13T00:05:06.791375406Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.681579442s" May 13 00:05:06.791778 containerd[1562]: time="2025-05-13T00:05:06.791401574Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 00:05:06.796437 containerd[1562]: time="2025-05-13T00:05:06.796184343Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:05:06.797576 containerd[1562]: time="2025-05-13T00:05:06.797333681Z" level=info msg="CreateContainer within sandbox \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:05:06.804252 containerd[1562]: time="2025-05-13T00:05:06.804229295Z" level=info msg="Container daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:06.808669 containerd[1562]: time="2025-05-13T00:05:06.808575393Z" level=info msg="CreateContainer within sandbox \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\"" May 13 00:05:06.809595 containerd[1562]: time="2025-05-13T00:05:06.809246380Z" level=info msg="StartContainer for \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\"" May 13 00:05:06.810127 containerd[1562]: time="2025-05-13T00:05:06.810114418Z" level=info msg="connecting to shim daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113" address="unix:///run/containerd/s/e2867c5029877d487ca20a664c35eafe897b8cd3bb963277d821813bf1937da1" protocol=ttrpc version=3 May 13 00:05:06.829325 systemd[1]: Started cri-containerd-daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113.scope - libcontainer container daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113. May 13 00:05:06.862641 containerd[1562]: time="2025-05-13T00:05:06.862361721Z" level=info msg="StartContainer for \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" returns successfully" May 13 00:05:07.116512 kubelet[2813]: I0513 00:05:07.115918 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-l6d6t" podStartSLOduration=1.430049603 podStartE2EDuration="4.115905326s" podCreationTimestamp="2025-05-13 00:05:03 +0000 UTC" firstStartedPulling="2025-05-13 00:05:04.109369933 +0000 UTC m=+7.225885243" lastFinishedPulling="2025-05-13 00:05:06.795225653 +0000 UTC m=+9.911740966" observedRunningTime="2025-05-13 00:05:07.115300045 +0000 UTC m=+10.231815355" watchObservedRunningTime="2025-05-13 00:05:07.115905326 +0000 UTC m=+10.232420635" May 13 00:05:12.267172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991735299.mount: Deactivated successfully. 
May 13 00:05:13.868947 containerd[1562]: time="2025-05-13T00:05:13.868827814Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:05:13.888262 containerd[1562]: time="2025-05-13T00:05:13.888218553Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 00:05:13.893368 containerd[1562]: time="2025-05-13T00:05:13.893328933Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:05:13.894214 containerd[1562]: time="2025-05-13T00:05:13.894115552Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.097889342s" May 13 00:05:13.894214 containerd[1562]: time="2025-05-13T00:05:13.894137985Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 00:05:13.902301 containerd[1562]: time="2025-05-13T00:05:13.902267659Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:05:14.031509 containerd[1562]: time="2025-05-13T00:05:14.031477593Z" level=info msg="Container 120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:14.035717 containerd[1562]: time="2025-05-13T00:05:14.035631073Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\"" May 13 00:05:14.036403 containerd[1562]: time="2025-05-13T00:05:14.036038446Z" level=info msg="StartContainer for \"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\"" May 13 00:05:14.039923 containerd[1562]: time="2025-05-13T00:05:14.039893251Z" level=info msg="connecting to shim 120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35" address="unix:///run/containerd/s/7264f4228233d45f4c8e6e6999c1c9d8e7cb0a16e49343eacd73ee4cd834248c" protocol=ttrpc version=3 May 13 00:05:14.211345 systemd[1]: Started cri-containerd-120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35.scope - libcontainer container 120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35. May 13 00:05:14.231048 containerd[1562]: time="2025-05-13T00:05:14.231014372Z" level=info msg="StartContainer for \"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\" returns successfully" May 13 00:05:14.238709 systemd[1]: cri-containerd-120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35.scope: Deactivated successfully. 
May 13 00:05:14.298748 containerd[1562]: time="2025-05-13T00:05:14.298713573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\" id:\"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\" pid:3268 exited_at:{seconds:1747094714 nanos:279631230}" May 13 00:05:14.302887 containerd[1562]: time="2025-05-13T00:05:14.302861273Z" level=info msg="received exit event container_id:\"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\" id:\"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\" pid:3268 exited_at:{seconds:1747094714 nanos:279631230}" May 13 00:05:15.030620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35-rootfs.mount: Deactivated successfully. May 13 00:05:15.132600 containerd[1562]: time="2025-05-13T00:05:15.132467550Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:05:15.138899 containerd[1562]: time="2025-05-13T00:05:15.137205130Z" level=info msg="Container 88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:15.141820 containerd[1562]: time="2025-05-13T00:05:15.141789394Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\"" May 13 00:05:15.142123 containerd[1562]: time="2025-05-13T00:05:15.142108383Z" level=info msg="StartContainer for \"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\"" May 13 00:05:15.142840 containerd[1562]: time="2025-05-13T00:05:15.142800629Z" level=info msg="connecting to shim 88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8" address="unix:///run/containerd/s/7264f4228233d45f4c8e6e6999c1c9d8e7cb0a16e49343eacd73ee4cd834248c" protocol=ttrpc version=3 May 13 00:05:15.162289 systemd[1]: Started cri-containerd-88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8.scope - libcontainer container 88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8. May 13 00:05:15.183340 containerd[1562]: time="2025-05-13T00:05:15.183315760Z" level=info msg="StartContainer for \"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\" returns successfully" May 13 00:05:15.194388 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:05:15.194555 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:05:15.194694 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 00:05:15.196422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:05:15.197763 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:05:15.199142 systemd[1]: cri-containerd-88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8.scope: Deactivated successfully. 
May 13 00:05:15.205455 containerd[1562]: time="2025-05-13T00:05:15.202364790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\" id:\"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\" pid:3313 exited_at:{seconds:1747094715 nanos:199696779}" May 13 00:05:15.205455 containerd[1562]: time="2025-05-13T00:05:15.202399253Z" level=info msg="received exit event container_id:\"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\" id:\"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\" pid:3313 exited_at:{seconds:1747094715 nanos:199696779}" May 13 00:05:15.213685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8-rootfs.mount: Deactivated successfully. May 13 00:05:15.284950 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:05:16.136508 containerd[1562]: time="2025-05-13T00:05:16.136403836Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:05:16.146965 containerd[1562]: time="2025-05-13T00:05:16.146938193Z" level=info msg="Container 87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:16.156601 containerd[1562]: time="2025-05-13T00:05:16.156570270Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\"" May 13 00:05:16.157768 containerd[1562]: time="2025-05-13T00:05:16.156986194Z" level=info msg="StartContainer for \"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\"" May 13 00:05:16.157768 containerd[1562]: time="2025-05-13T00:05:16.157770967Z" level=info msg="connecting to shim 87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f" address="unix:///run/containerd/s/7264f4228233d45f4c8e6e6999c1c9d8e7cb0a16e49343eacd73ee4cd834248c" protocol=ttrpc version=3 May 13 00:05:16.174296 systemd[1]: Started cri-containerd-87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f.scope - libcontainer container 87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f. May 13 00:05:16.201288 systemd[1]: cri-containerd-87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f.scope: Deactivated successfully. 
May 13 00:05:16.202211 containerd[1562]: time="2025-05-13T00:05:16.202168155Z" level=info msg="received exit event container_id:\"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\" id:\"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\" pid:3358 exited_at:{seconds:1747094716 nanos:201670854}" May 13 00:05:16.211145 containerd[1562]: time="2025-05-13T00:05:16.210213120Z" level=info msg="StartContainer for \"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\" returns successfully" May 13 00:05:16.215407 containerd[1562]: time="2025-05-13T00:05:16.215379295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\" id:\"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\" pid:3358 exited_at:{seconds:1747094716 nanos:201670854}" May 13 00:05:16.225178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f-rootfs.mount: Deactivated successfully. May 13 00:05:17.137846 containerd[1562]: time="2025-05-13T00:05:17.137280880Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:05:17.161043 containerd[1562]: time="2025-05-13T00:05:17.161021085Z" level=info msg="Container 23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:17.174238 containerd[1562]: time="2025-05-13T00:05:17.174201666Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\"" May 13 00:05:17.174661 containerd[1562]: time="2025-05-13T00:05:17.174647061Z" level=info msg="StartContainer for \"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\"" May 13 00:05:17.175347 containerd[1562]: time="2025-05-13T00:05:17.175305185Z" level=info msg="connecting to shim 23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b" address="unix:///run/containerd/s/7264f4228233d45f4c8e6e6999c1c9d8e7cb0a16e49343eacd73ee4cd834248c" protocol=ttrpc version=3 May 13 00:05:17.196342 systemd[1]: Started cri-containerd-23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b.scope - libcontainer container 23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b. May 13 00:05:17.232486 systemd[1]: cri-containerd-23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b.scope: Deactivated successfully. 
May 13 00:05:17.233029 containerd[1562]: time="2025-05-13T00:05:17.232881156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\" id:\"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\" pid:3396 exited_at:{seconds:1747094717 nanos:232365159}" May 13 00:05:17.233721 containerd[1562]: time="2025-05-13T00:05:17.233642354Z" level=info msg="received exit event container_id:\"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\" id:\"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\" pid:3396 exited_at:{seconds:1747094717 nanos:232365159}" May 13 00:05:17.239644 containerd[1562]: time="2025-05-13T00:05:17.239620734Z" level=info msg="StartContainer for \"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\" returns successfully" May 13 00:05:17.249967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b-rootfs.mount: Deactivated successfully. May 13 00:05:18.142412 containerd[1562]: time="2025-05-13T00:05:18.142282780Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:05:18.160200 containerd[1562]: time="2025-05-13T00:05:18.159785948Z" level=info msg="Container 8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:18.163553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841603127.mount: Deactivated successfully. May 13 00:05:18.172762 containerd[1562]: time="2025-05-13T00:05:18.172738805Z" level=info msg="CreateContainer within sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\"" May 13 00:05:18.173651 containerd[1562]: time="2025-05-13T00:05:18.173500859Z" level=info msg="StartContainer for \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\"" May 13 00:05:18.174199 containerd[1562]: time="2025-05-13T00:05:18.174182204Z" level=info msg="connecting to shim 8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753" address="unix:///run/containerd/s/7264f4228233d45f4c8e6e6999c1c9d8e7cb0a16e49343eacd73ee4cd834248c" protocol=ttrpc version=3 May 13 00:05:18.187275 systemd[1]: Started cri-containerd-8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753.scope - libcontainer container 8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753. 
May 13 00:05:18.220761 containerd[1562]: time="2025-05-13T00:05:18.220736069Z" level=info msg="StartContainer for \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" returns successfully" May 13 00:05:18.304124 containerd[1562]: time="2025-05-13T00:05:18.303726874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" id:\"7b8671a97108b9f9d09795644f184a61fadc40f5e667cd7431951ab7dbeb3860\" pid:3469 exited_at:{seconds:1747094718 nanos:303461179}" May 13 00:05:18.412967 kubelet[2813]: I0513 00:05:18.412902 2813 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 00:05:18.439718 systemd[1]: Created slice kubepods-burstable-pod1b4cbc6d_9685_4783_b164_4af7b191da15.slice - libcontainer container kubepods-burstable-pod1b4cbc6d_9685_4783_b164_4af7b191da15.slice. May 13 00:05:18.445434 systemd[1]: Created slice kubepods-burstable-pod72812703_04c2_453e_9c0b_4eb96788daa6.slice - libcontainer container kubepods-burstable-pod72812703_04c2_453e_9c0b_4eb96788daa6.slice. May 13 00:05:18.591879 kubelet[2813]: I0513 00:05:18.591848 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5nv\" (UniqueName: \"kubernetes.io/projected/1b4cbc6d-9685-4783-b164-4af7b191da15-kube-api-access-st5nv\") pod \"coredns-6f6b679f8f-vgpkq\" (UID: \"1b4cbc6d-9685-4783-b164-4af7b191da15\") " pod="kube-system/coredns-6f6b679f8f-vgpkq" May 13 00:05:18.591879 kubelet[2813]: I0513 00:05:18.591880 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72812703-04c2-453e-9c0b-4eb96788daa6-config-volume\") pod \"coredns-6f6b679f8f-qz7n2\" (UID: \"72812703-04c2-453e-9c0b-4eb96788daa6\") " pod="kube-system/coredns-6f6b679f8f-qz7n2" May 13 00:05:18.592045 kubelet[2813]: I0513 00:05:18.591891 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b4cbc6d-9685-4783-b164-4af7b191da15-config-volume\") pod \"coredns-6f6b679f8f-vgpkq\" (UID: \"1b4cbc6d-9685-4783-b164-4af7b191da15\") " pod="kube-system/coredns-6f6b679f8f-vgpkq" May 13 00:05:18.592045 kubelet[2813]: I0513 00:05:18.591908 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkb9s\" (UniqueName: \"kubernetes.io/projected/72812703-04c2-453e-9c0b-4eb96788daa6-kube-api-access-vkb9s\") pod \"coredns-6f6b679f8f-qz7n2\" (UID: \"72812703-04c2-453e-9c0b-4eb96788daa6\") " pod="kube-system/coredns-6f6b679f8f-qz7n2" May 13 00:05:18.744222 containerd[1562]: time="2025-05-13T00:05:18.744089533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vgpkq,Uid:1b4cbc6d-9685-4783-b164-4af7b191da15,Namespace:kube-system,Attempt:0,}" May 13 00:05:18.752086 containerd[1562]: time="2025-05-13T00:05:18.751846205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qz7n2,Uid:72812703-04c2-453e-9c0b-4eb96788daa6,Namespace:kube-system,Attempt:0,}" May 13 00:05:19.157954 kubelet[2813]: I0513 00:05:19.157799 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x5fsv" podStartSLOduration=6.469657062 podStartE2EDuration="16.155890296s" podCreationTimestamp="2025-05-13 00:05:03 +0000 UTC" firstStartedPulling="2025-05-13 00:05:04.208388695 +0000 UTC 
m=+7.324904005" lastFinishedPulling="2025-05-13 00:05:13.894621928 +0000 UTC m=+17.011137239" observedRunningTime="2025-05-13 00:05:19.15581026 +0000 UTC m=+22.272325575" watchObservedRunningTime="2025-05-13 00:05:19.155890296 +0000 UTC m=+22.272405616" May 13 00:05:45.939791 systemd-networkd[1461]: cilium_host: Link UP May 13 00:05:45.940360 systemd-networkd[1461]: cilium_net: Link UP May 13 00:05:45.940546 systemd-networkd[1461]: cilium_net: Gained carrier May 13 00:05:45.940656 systemd-networkd[1461]: cilium_host: Gained carrier May 13 00:05:46.039245 systemd-networkd[1461]: cilium_vxlan: Link UP May 13 00:05:46.039334 systemd-networkd[1461]: cilium_vxlan: Gained carrier May 13 00:05:46.494207 kernel: NET: Registered PF_ALG protocol family May 13 00:05:46.668272 systemd-networkd[1461]: cilium_net: Gained IPv6LL May 13 00:05:46.860281 systemd-networkd[1461]: cilium_host: Gained IPv6LL May 13 00:05:47.058129 systemd-networkd[1461]: lxc_health: Link UP May 13 00:05:47.066735 systemd-networkd[1461]: lxc_health: Gained carrier May 13 00:05:47.329515 kernel: eth0: renamed from tmpedd75 May 13 00:05:47.335328 systemd-networkd[1461]: lxca9d45adeb7a7: Link UP May 13 00:05:47.335689 systemd-networkd[1461]: lxca9d45adeb7a7: Gained carrier May 13 00:05:47.337027 systemd-networkd[1461]: lxc1c3c36666d0b: Link UP May 13 00:05:47.342631 kernel: eth0: renamed from tmpe86fe May 13 00:05:47.347927 systemd-networkd[1461]: lxc1c3c36666d0b: Gained carrier May 13 00:05:47.948435 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL May 13 00:05:48.268306 systemd-networkd[1461]: lxc_health: Gained IPv6LL May 13 00:05:48.844273 systemd-networkd[1461]: lxca9d45adeb7a7: Gained IPv6LL May 13 00:05:48.909302 systemd-networkd[1461]: lxc1c3c36666d0b: Gained IPv6LL May 13 00:05:49.910569 containerd[1562]: time="2025-05-13T00:05:49.910523869Z" level=info msg="connecting to shim e86fe765c80cef5345a306a1466f1bc42640abb39a2a526ec2bd769d3e919121" address="unix:///run/containerd/s/efe41a99d635bf4f132a7ecdecff75d9e952ac3111981ea6c772c310885880c6" namespace=k8s.io protocol=ttrpc version=3 May 13 00:05:49.912748 containerd[1562]: time="2025-05-13T00:05:49.912698700Z" level=info msg="connecting to shim edd7523dce7d3cfb0b982f85b56079cb5e93a3f40c840011d6bcb041a8978a46" address="unix:///run/containerd/s/63ce2681f316b465f7dc505ddd97e5f75da5273fe13a783ab3d17bc59b095c00" namespace=k8s.io protocol=ttrpc version=3 May 13 00:05:49.944506 systemd[1]: Started cri-containerd-edd7523dce7d3cfb0b982f85b56079cb5e93a3f40c840011d6bcb041a8978a46.scope - libcontainer container edd7523dce7d3cfb0b982f85b56079cb5e93a3f40c840011d6bcb041a8978a46. May 13 00:05:49.948643 systemd[1]: Started cri-containerd-e86fe765c80cef5345a306a1466f1bc42640abb39a2a526ec2bd769d3e919121.scope - libcontainer container e86fe765c80cef5345a306a1466f1bc42640abb39a2a526ec2bd769d3e919121. 
May 13 00:05:49.967336 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:05:49.973807 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:05:50.020256 containerd[1562]: time="2025-05-13T00:05:50.020230528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vgpkq,Uid:1b4cbc6d-9685-4783-b164-4af7b191da15,Namespace:kube-system,Attempt:0,} returns sandbox id \"e86fe765c80cef5345a306a1466f1bc42640abb39a2a526ec2bd769d3e919121\"" May 13 00:05:50.020880 containerd[1562]: time="2025-05-13T00:05:50.020863603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qz7n2,Uid:72812703-04c2-453e-9c0b-4eb96788daa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"edd7523dce7d3cfb0b982f85b56079cb5e93a3f40c840011d6bcb041a8978a46\"" May 13 00:05:50.026245 containerd[1562]: time="2025-05-13T00:05:50.026228056Z" level=info msg="CreateContainer within sandbox \"edd7523dce7d3cfb0b982f85b56079cb5e93a3f40c840011d6bcb041a8978a46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:05:50.026353 containerd[1562]: time="2025-05-13T00:05:50.026324696Z" level=info msg="CreateContainer within sandbox \"e86fe765c80cef5345a306a1466f1bc42640abb39a2a526ec2bd769d3e919121\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:05:50.040015 containerd[1562]: time="2025-05-13T00:05:50.039979832Z" level=info msg="Container 31fc5c724b85240b1545b14cc6e46a7434d58ce96bafdb50ad20b75e96d84a12: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:50.040772 containerd[1562]: time="2025-05-13T00:05:50.040211309Z" level=info msg="Container caeaa698d667de787a6c506c10ef2790ea901e801cd414ca180e7c0c43a5b1dc: CDI devices from CRI Config.CDIDevices: []" May 13 00:05:50.043102 containerd[1562]: time="2025-05-13T00:05:50.043085893Z" level=info msg="CreateContainer within sandbox \"e86fe765c80cef5345a306a1466f1bc42640abb39a2a526ec2bd769d3e919121\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"caeaa698d667de787a6c506c10ef2790ea901e801cd414ca180e7c0c43a5b1dc\"" May 13 00:05:50.044230 containerd[1562]: time="2025-05-13T00:05:50.044160132Z" level=info msg="StartContainer for \"caeaa698d667de787a6c506c10ef2790ea901e801cd414ca180e7c0c43a5b1dc\"" May 13 00:05:50.044841 containerd[1562]: time="2025-05-13T00:05:50.044721859Z" level=info msg="CreateContainer within sandbox \"edd7523dce7d3cfb0b982f85b56079cb5e93a3f40c840011d6bcb041a8978a46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31fc5c724b85240b1545b14cc6e46a7434d58ce96bafdb50ad20b75e96d84a12\"" May 13 00:05:50.045255 containerd[1562]: time="2025-05-13T00:05:50.045216547Z" level=info msg="StartContainer for \"31fc5c724b85240b1545b14cc6e46a7434d58ce96bafdb50ad20b75e96d84a12\"" May 13 00:05:50.045420 containerd[1562]: time="2025-05-13T00:05:50.045310145Z" level=info msg="connecting to shim caeaa698d667de787a6c506c10ef2790ea901e801cd414ca180e7c0c43a5b1dc" address="unix:///run/containerd/s/efe41a99d635bf4f132a7ecdecff75d9e952ac3111981ea6c772c310885880c6" protocol=ttrpc version=3 May 13 00:05:50.046519 containerd[1562]: time="2025-05-13T00:05:50.046455383Z" level=info msg="connecting to shim 31fc5c724b85240b1545b14cc6e46a7434d58ce96bafdb50ad20b75e96d84a12" address="unix:///run/containerd/s/63ce2681f316b465f7dc505ddd97e5f75da5273fe13a783ab3d17bc59b095c00" protocol=ttrpc version=3 May 13 00:05:50.059296 
systemd[1]: Started cri-containerd-caeaa698d667de787a6c506c10ef2790ea901e801cd414ca180e7c0c43a5b1dc.scope - libcontainer container caeaa698d667de787a6c506c10ef2790ea901e801cd414ca180e7c0c43a5b1dc. May 13 00:05:50.062276 systemd[1]: Started cri-containerd-31fc5c724b85240b1545b14cc6e46a7434d58ce96bafdb50ad20b75e96d84a12.scope - libcontainer container 31fc5c724b85240b1545b14cc6e46a7434d58ce96bafdb50ad20b75e96d84a12. May 13 00:05:50.088932 containerd[1562]: time="2025-05-13T00:05:50.088907515Z" level=info msg="StartContainer for \"31fc5c724b85240b1545b14cc6e46a7434d58ce96bafdb50ad20b75e96d84a12\" returns successfully" May 13 00:05:50.094080 containerd[1562]: time="2025-05-13T00:05:50.094054972Z" level=info msg="StartContainer for \"caeaa698d667de787a6c506c10ef2790ea901e801cd414ca180e7c0c43a5b1dc\" returns successfully" May 13 00:05:50.196679 kubelet[2813]: I0513 00:05:50.196180 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qz7n2" podStartSLOduration=47.196166068 podStartE2EDuration="47.196166068s" podCreationTimestamp="2025-05-13 00:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:05:50.195433123 +0000 UTC m=+53.311948440" watchObservedRunningTime="2025-05-13 00:05:50.196166068 +0000 UTC m=+53.312681381" May 13 00:05:50.878370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021879010.mount: Deactivated successfully. May 13 00:05:51.242118 kubelet[2813]: I0513 00:05:51.242079 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vgpkq" podStartSLOduration=48.242067616 podStartE2EDuration="48.242067616s" podCreationTimestamp="2025-05-13 00:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:05:50.21974149 +0000 UTC m=+53.336256809" watchObservedRunningTime="2025-05-13 00:05:51.242067616 +0000 UTC m=+54.358582934" May 13 00:06:00.817495 systemd[1]: Started sshd@7-139.178.70.105:22-147.75.109.163:47328.service - OpenSSH per-connection server daemon (147.75.109.163:47328). May 13 00:06:00.903033 sshd[4125]: Accepted publickey for core from 147.75.109.163 port 47328 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:00.910265 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:00.922278 systemd-logind[1534]: New session 10 of user core. May 13 00:06:00.924313 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:06:01.825224 sshd[4127]: Connection closed by 147.75.109.163 port 47328 May 13 00:06:01.825382 sshd-session[4125]: pam_unix(sshd:session): session closed for user core May 13 00:06:01.827520 systemd[1]: sshd@7-139.178.70.105:22-147.75.109.163:47328.service: Deactivated successfully. May 13 00:06:01.829173 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:06:01.830577 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. May 13 00:06:01.831405 systemd-logind[1534]: Removed session 10. May 13 00:06:06.838271 systemd[1]: Started sshd@8-139.178.70.105:22-147.75.109.163:47338.service - OpenSSH per-connection server daemon (147.75.109.163:47338). 
May 13 00:06:06.906781 sshd[4143]: Accepted publickey for core from 147.75.109.163 port 47338 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:06.907681 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:06.912075 systemd-logind[1534]: New session 11 of user core. May 13 00:06:06.921327 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:06:07.057262 sshd[4145]: Connection closed by 147.75.109.163 port 47338 May 13 00:06:07.057739 sshd-session[4143]: pam_unix(sshd:session): session closed for user core May 13 00:06:07.060708 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. May 13 00:06:07.060806 systemd[1]: sshd@8-139.178.70.105:22-147.75.109.163:47338.service: Deactivated successfully. May 13 00:06:07.064287 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:06:07.066013 systemd-logind[1534]: Removed session 11. May 13 00:06:12.068402 systemd[1]: Started sshd@9-139.178.70.105:22-147.75.109.163:53166.service - OpenSSH per-connection server daemon (147.75.109.163:53166). May 13 00:06:12.145122 sshd[4157]: Accepted publickey for core from 147.75.109.163 port 53166 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:12.146254 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:12.150261 systemd-logind[1534]: New session 12 of user core. May 13 00:06:12.155466 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:06:12.255279 sshd[4159]: Connection closed by 147.75.109.163 port 53166 May 13 00:06:12.255649 sshd-session[4157]: pam_unix(sshd:session): session closed for user core May 13 00:06:12.257814 systemd[1]: sshd@9-139.178.70.105:22-147.75.109.163:53166.service: Deactivated successfully. May 13 00:06:12.258915 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:06:12.259450 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. May 13 00:06:12.260029 systemd-logind[1534]: Removed session 12. May 13 00:06:17.275328 systemd[1]: Started sshd@10-139.178.70.105:22-147.75.109.163:53172.service - OpenSSH per-connection server daemon (147.75.109.163:53172). May 13 00:06:17.316495 sshd[4171]: Accepted publickey for core from 147.75.109.163 port 53172 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:17.317511 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:17.320930 systemd-logind[1534]: New session 13 of user core. May 13 00:06:17.329322 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:06:17.434437 sshd[4173]: Connection closed by 147.75.109.163 port 53172 May 13 00:06:17.435437 sshd-session[4171]: pam_unix(sshd:session): session closed for user core May 13 00:06:17.445154 systemd[1]: sshd@10-139.178.70.105:22-147.75.109.163:53172.service: Deactivated successfully. May 13 00:06:17.446182 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:06:17.446706 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. May 13 00:06:17.448216 systemd[1]: Started sshd@11-139.178.70.105:22-147.75.109.163:53180.service - OpenSSH per-connection server daemon (147.75.109.163:53180). May 13 00:06:17.449550 systemd-logind[1534]: Removed session 13. 
May 13 00:06:17.485225 sshd[4185]: Accepted publickey for core from 147.75.109.163 port 53180 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:17.486054 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:17.488945 systemd-logind[1534]: New session 14 of user core. May 13 00:06:17.500304 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 00:06:17.630218 sshd[4188]: Connection closed by 147.75.109.163 port 53180 May 13 00:06:17.630779 sshd-session[4185]: pam_unix(sshd:session): session closed for user core May 13 00:06:17.640348 systemd[1]: sshd@11-139.178.70.105:22-147.75.109.163:53180.service: Deactivated successfully. May 13 00:06:17.642398 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:06:17.645674 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. May 13 00:06:17.648793 systemd[1]: Started sshd@12-139.178.70.105:22-147.75.109.163:53184.service - OpenSSH per-connection server daemon (147.75.109.163:53184). May 13 00:06:17.655411 systemd-logind[1534]: Removed session 14. May 13 00:06:17.707287 sshd[4197]: Accepted publickey for core from 147.75.109.163 port 53184 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:17.708254 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:17.711075 systemd-logind[1534]: New session 15 of user core. May 13 00:06:17.715366 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 00:06:17.819214 sshd[4200]: Connection closed by 147.75.109.163 port 53184 May 13 00:06:17.819669 sshd-session[4197]: pam_unix(sshd:session): session closed for user core May 13 00:06:17.822360 systemd[1]: sshd@12-139.178.70.105:22-147.75.109.163:53184.service: Deactivated successfully. May 13 00:06:17.823660 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:06:17.824152 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. May 13 00:06:17.824786 systemd-logind[1534]: Removed session 15. May 13 00:06:22.836357 systemd[1]: Started sshd@13-139.178.70.105:22-147.75.109.163:38378.service - OpenSSH per-connection server daemon (147.75.109.163:38378). May 13 00:06:22.877585 sshd[4213]: Accepted publickey for core from 147.75.109.163 port 38378 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:22.878461 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:22.881890 systemd-logind[1534]: New session 16 of user core. May 13 00:06:22.887357 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 00:06:22.977677 sshd[4215]: Connection closed by 147.75.109.163 port 38378 May 13 00:06:22.977585 sshd-session[4213]: pam_unix(sshd:session): session closed for user core May 13 00:06:22.979603 systemd[1]: sshd@13-139.178.70.105:22-147.75.109.163:38378.service: Deactivated successfully. May 13 00:06:22.981264 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:06:22.981297 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. May 13 00:06:22.982072 systemd-logind[1534]: Removed session 16. May 13 00:06:27.987583 systemd[1]: Started sshd@14-139.178.70.105:22-147.75.109.163:60862.service - OpenSSH per-connection server daemon (147.75.109.163:60862). 
May 13 00:06:28.029620 sshd[4227]: Accepted publickey for core from 147.75.109.163 port 60862 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:28.030559 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:28.034865 systemd-logind[1534]: New session 17 of user core. May 13 00:06:28.042337 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 00:06:28.146129 sshd[4229]: Connection closed by 147.75.109.163 port 60862 May 13 00:06:28.147516 sshd-session[4227]: pam_unix(sshd:session): session closed for user core May 13 00:06:28.153741 systemd[1]: sshd@14-139.178.70.105:22-147.75.109.163:60862.service: Deactivated successfully. May 13 00:06:28.155334 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:06:28.155960 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. May 13 00:06:28.158031 systemd[1]: Started sshd@15-139.178.70.105:22-147.75.109.163:60868.service - OpenSSH per-connection server daemon (147.75.109.163:60868). May 13 00:06:28.158831 systemd-logind[1534]: Removed session 17. May 13 00:06:28.208111 sshd[4240]: Accepted publickey for core from 147.75.109.163 port 60868 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:28.208921 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:28.212103 systemd-logind[1534]: New session 18 of user core. May 13 00:06:28.222283 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:06:28.626237 sshd[4243]: Connection closed by 147.75.109.163 port 60868 May 13 00:06:28.626996 sshd-session[4240]: pam_unix(sshd:session): session closed for user core May 13 00:06:28.634331 systemd[1]: Started sshd@16-139.178.70.105:22-147.75.109.163:60874.service - OpenSSH per-connection server daemon (147.75.109.163:60874). May 13 00:06:28.634673 systemd[1]: sshd@15-139.178.70.105:22-147.75.109.163:60868.service: Deactivated successfully. May 13 00:06:28.635730 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:06:28.637483 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. May 13 00:06:28.638144 systemd-logind[1534]: Removed session 18. May 13 00:06:28.676470 sshd[4250]: Accepted publickey for core from 147.75.109.163 port 60874 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:28.677485 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:28.681645 systemd-logind[1534]: New session 19 of user core. May 13 00:06:28.690282 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 00:06:30.387478 sshd[4255]: Connection closed by 147.75.109.163 port 60874 May 13 00:06:30.387779 sshd-session[4250]: pam_unix(sshd:session): session closed for user core May 13 00:06:30.398693 systemd[1]: Started sshd@17-139.178.70.105:22-147.75.109.163:60878.service - OpenSSH per-connection server daemon (147.75.109.163:60878). May 13 00:06:30.399614 systemd[1]: sshd@16-139.178.70.105:22-147.75.109.163:60874.service: Deactivated successfully. May 13 00:06:30.400833 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:06:30.404317 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. May 13 00:06:30.406123 systemd-logind[1534]: Removed session 19. 
May 13 00:06:30.464397 sshd[4270]: Accepted publickey for core from 147.75.109.163 port 60878 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:30.465479 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:30.468711 systemd-logind[1534]: New session 20 of user core. May 13 00:06:30.476363 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:06:30.672745 sshd[4275]: Connection closed by 147.75.109.163 port 60878 May 13 00:06:30.673179 sshd-session[4270]: pam_unix(sshd:session): session closed for user core May 13 00:06:30.683366 systemd[1]: sshd@17-139.178.70.105:22-147.75.109.163:60878.service: Deactivated successfully. May 13 00:06:30.684873 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:06:30.685856 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. May 13 00:06:30.687818 systemd[1]: Started sshd@18-139.178.70.105:22-147.75.109.163:60884.service - OpenSSH per-connection server daemon (147.75.109.163:60884). May 13 00:06:30.688726 systemd-logind[1534]: Removed session 20. May 13 00:06:30.729215 sshd[4284]: Accepted publickey for core from 147.75.109.163 port 60884 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:30.730437 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:30.734112 systemd-logind[1534]: New session 21 of user core. May 13 00:06:30.737360 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 00:06:30.852617 sshd[4287]: Connection closed by 147.75.109.163 port 60884 May 13 00:06:30.855121 systemd[1]: sshd@18-139.178.70.105:22-147.75.109.163:60884.service: Deactivated successfully. May 13 00:06:30.853144 sshd-session[4284]: pam_unix(sshd:session): session closed for user core May 13 00:06:30.856846 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:06:30.857975 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. May 13 00:06:30.858687 systemd-logind[1534]: Removed session 21. May 13 00:06:35.865511 systemd[1]: Started sshd@19-139.178.70.105:22-147.75.109.163:60898.service - OpenSSH per-connection server daemon (147.75.109.163:60898). May 13 00:06:35.947181 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 60898 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:35.948019 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:35.950834 systemd-logind[1534]: New session 22 of user core. May 13 00:06:35.957301 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 00:06:36.195037 sshd[4302]: Connection closed by 147.75.109.163 port 60898 May 13 00:06:36.199374 sshd-session[4300]: pam_unix(sshd:session): session closed for user core May 13 00:06:36.201420 systemd[1]: sshd@19-139.178.70.105:22-147.75.109.163:60898.service: Deactivated successfully. May 13 00:06:36.202679 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:06:36.203266 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. May 13 00:06:36.203830 systemd-logind[1534]: Removed session 22. May 13 00:06:41.204534 systemd[1]: Started sshd@20-139.178.70.105:22-147.75.109.163:44522.service - OpenSSH per-connection server daemon (147.75.109.163:44522). 
May 13 00:06:41.249738 sshd[4319]: Accepted publickey for core from 147.75.109.163 port 44522 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:41.250563 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:41.254224 systemd-logind[1534]: New session 23 of user core. May 13 00:06:41.260359 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 00:06:41.353214 sshd[4321]: Connection closed by 147.75.109.163 port 44522 May 13 00:06:41.353759 sshd-session[4319]: pam_unix(sshd:session): session closed for user core May 13 00:06:41.355702 systemd[1]: sshd@20-139.178.70.105:22-147.75.109.163:44522.service: Deactivated successfully. May 13 00:06:41.356747 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:06:41.357230 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. May 13 00:06:41.357868 systemd-logind[1534]: Removed session 23. May 13 00:06:46.366202 systemd[1]: Started sshd@21-139.178.70.105:22-147.75.109.163:44534.service - OpenSSH per-connection server daemon (147.75.109.163:44534). May 13 00:06:46.420632 sshd[4333]: Accepted publickey for core from 147.75.109.163 port 44534 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:46.421684 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:46.426123 systemd-logind[1534]: New session 24 of user core. May 13 00:06:46.434288 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 00:06:46.538383 sshd[4335]: Connection closed by 147.75.109.163 port 44534 May 13 00:06:46.539595 sshd-session[4333]: pam_unix(sshd:session): session closed for user core May 13 00:06:46.541639 systemd[1]: sshd@21-139.178.70.105:22-147.75.109.163:44534.service: Deactivated successfully. May 13 00:06:46.542658 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:06:46.543086 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit. May 13 00:06:46.543902 systemd-logind[1534]: Removed session 24. May 13 00:06:51.549165 systemd[1]: Started sshd@22-139.178.70.105:22-147.75.109.163:35754.service - OpenSSH per-connection server daemon (147.75.109.163:35754). May 13 00:06:51.593730 sshd[4348]: Accepted publickey for core from 147.75.109.163 port 35754 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:51.594603 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:51.598175 systemd-logind[1534]: New session 25 of user core. May 13 00:06:51.601314 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 00:06:51.686270 sshd[4350]: Connection closed by 147.75.109.163 port 35754 May 13 00:06:51.687161 sshd-session[4348]: pam_unix(sshd:session): session closed for user core May 13 00:06:51.693547 systemd[1]: sshd@22-139.178.70.105:22-147.75.109.163:35754.service: Deactivated successfully. May 13 00:06:51.694566 systemd[1]: session-25.scope: Deactivated successfully. May 13 00:06:51.695414 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit. May 13 00:06:51.696151 systemd[1]: Started sshd@23-139.178.70.105:22-147.75.109.163:35762.service - OpenSSH per-connection server daemon (147.75.109.163:35762). May 13 00:06:51.697561 systemd-logind[1534]: Removed session 25. 
May 13 00:06:51.740620 sshd[4360]: Accepted publickey for core from 147.75.109.163 port 35762 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:51.741391 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:51.744826 systemd-logind[1534]: New session 26 of user core. May 13 00:06:51.748287 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 00:06:53.208664 containerd[1562]: time="2025-05-13T00:06:53.208572778Z" level=info msg="StopContainer for \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" with timeout 30 (s)" May 13 00:06:53.211991 containerd[1562]: time="2025-05-13T00:06:53.211425965Z" level=info msg="Stop container \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" with signal terminated" May 13 00:06:53.231963 systemd[1]: cri-containerd-daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113.scope: Deactivated successfully. May 13 00:06:53.232172 systemd[1]: cri-containerd-daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113.scope: Consumed 203ms CPU time, 29M memory peak, 4.3M read from disk, 4K written to disk. May 13 00:06:53.233590 containerd[1562]: time="2025-05-13T00:06:53.233474549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" id:\"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" pid:3137 exited_at:{seconds:1747094813 nanos:231795191}" May 13 00:06:53.233590 containerd[1562]: time="2025-05-13T00:06:53.233520310Z" level=info msg="received exit event container_id:\"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" id:\"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" pid:3137 exited_at:{seconds:1747094813 nanos:231795191}" May 13 00:06:53.251389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113-rootfs.mount: Deactivated successfully. 
May 13 00:06:53.251876 containerd[1562]: time="2025-05-13T00:06:53.251573414Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:06:53.255503 containerd[1562]: time="2025-05-13T00:06:53.255474686Z" level=info msg="StopContainer for \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" returns successfully" May 13 00:06:53.255966 containerd[1562]: time="2025-05-13T00:06:53.255929132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" id:\"d3a716a05ae8e9902e4277ca23ed7ca28f13aadc42be84cec9d25cf5e1c3dfe0\" pid:4388 exited_at:{seconds:1747094813 nanos:255755755}" May 13 00:06:53.263239 containerd[1562]: time="2025-05-13T00:06:53.263173947Z" level=info msg="StopContainer for \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" with timeout 2 (s)" May 13 00:06:53.263390 containerd[1562]: time="2025-05-13T00:06:53.263253622Z" level=info msg="StopPodSandbox for \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\"" May 13 00:06:53.263421 containerd[1562]: time="2025-05-13T00:06:53.263416207Z" level=info msg="Container to stop \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:06:53.263692 containerd[1562]: time="2025-05-13T00:06:53.263670998Z" level=info msg="Stop container \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" with signal terminated" May 13 00:06:53.269668 systemd-networkd[1461]: lxc_health: Link DOWN May 13 00:06:53.269673 systemd-networkd[1461]: lxc_health: Lost carrier May 13 00:06:53.271824 systemd[1]: cri-containerd-daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603.scope: Deactivated successfully. May 13 00:06:53.276918 containerd[1562]: time="2025-05-13T00:06:53.276892266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" id:\"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" pid:2919 exit_status:137 exited_at:{seconds:1747094813 nanos:276295093}" May 13 00:06:53.285239 systemd[1]: cri-containerd-8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753.scope: Deactivated successfully. May 13 00:06:53.285696 systemd[1]: cri-containerd-8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753.scope: Consumed 4.400s CPU time, 188.8M memory peak, 66.4M read from disk, 13.3M written to disk. May 13 00:06:53.295209 containerd[1562]: time="2025-05-13T00:06:53.285704501Z" level=info msg="received exit event container_id:\"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" id:\"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" pid:3433 exited_at:{seconds:1747094813 nanos:285435579}" May 13 00:06:53.301366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603-rootfs.mount: Deactivated successfully. May 13 00:06:53.304776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753-rootfs.mount: Deactivated successfully. 
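The TaskExit and "received exit event" entries above carry the exit time as an exited_at:{seconds:... nanos:...} pair rather than a formatted timestamp. As a small illustrative sketch (an assumption, not containerd code), this converts that pair to a readable UTC time; the example values are copied from the daf49648... exit event above and line up with the surrounding 2025-05-13T00:06:53Z entries.

    # Sketch only: format a containerd exited_at {seconds, nanos} pair as UTC.
    from datetime import datetime, timezone

    def exited_at_to_utc(seconds: int, nanos: int) -> str:
        ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
        return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

    if __name__ == "__main__":
        # exited_at:{seconds:1747094813 nanos:231795191} from the log above
        print(exited_at_to_utc(1747094813, 231795191))
        # -> 2025-05-13T00:06:53.231795191Z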
May 13 00:06:53.320630 containerd[1562]: time="2025-05-13T00:06:53.320419376Z" level=info msg="shim disconnected" id=daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603 namespace=k8s.io May 13 00:06:53.320630 containerd[1562]: time="2025-05-13T00:06:53.320437045Z" level=warning msg="cleaning up after shim disconnected" id=daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603 namespace=k8s.io May 13 00:06:53.323513 containerd[1562]: time="2025-05-13T00:06:53.320441677Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:06:53.327770 containerd[1562]: time="2025-05-13T00:06:53.327676060Z" level=info msg="StopContainer for \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" returns successfully" May 13 00:06:53.328227 containerd[1562]: time="2025-05-13T00:06:53.328057422Z" level=info msg="StopPodSandbox for \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\"" May 13 00:06:53.328227 containerd[1562]: time="2025-05-13T00:06:53.328098917Z" level=info msg="Container to stop \"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:06:53.328227 containerd[1562]: time="2025-05-13T00:06:53.328107363Z" level=info msg="Container to stop \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:06:53.328227 containerd[1562]: time="2025-05-13T00:06:53.328112973Z" level=info msg="Container to stop \"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:06:53.328227 containerd[1562]: time="2025-05-13T00:06:53.328117803Z" level=info msg="Container to stop \"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:06:53.328227 containerd[1562]: time="2025-05-13T00:06:53.328122530Z" level=info msg="Container to stop \"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:06:53.333619 containerd[1562]: time="2025-05-13T00:06:53.333591909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" id:\"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" pid:3433 exited_at:{seconds:1747094813 nanos:285435579}" May 13 00:06:53.334810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603-shm.mount: Deactivated successfully. May 13 00:06:53.335929 systemd[1]: cri-containerd-858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae.scope: Deactivated successfully. 
May 13 00:06:53.336257 containerd[1562]: time="2025-05-13T00:06:53.336204707Z" level=info msg="TearDown network for sandbox \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" successfully" May 13 00:06:53.336257 containerd[1562]: time="2025-05-13T00:06:53.336221368Z" level=info msg="StopPodSandbox for \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" returns successfully" May 13 00:06:53.336434 containerd[1562]: time="2025-05-13T00:06:53.336341552Z" level=info msg="received exit event sandbox_id:\"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" exit_status:137 exited_at:{seconds:1747094813 nanos:276295093}" May 13 00:06:53.338630 containerd[1562]: time="2025-05-13T00:06:53.338044784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" id:\"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" pid:2962 exit_status:137 exited_at:{seconds:1747094813 nanos:337568181}" May 13 00:06:53.355892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae-rootfs.mount: Deactivated successfully. May 13 00:06:53.377115 containerd[1562]: time="2025-05-13T00:06:53.376965273Z" level=info msg="shim disconnected" id=858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae namespace=k8s.io May 13 00:06:53.377115 containerd[1562]: time="2025-05-13T00:06:53.376990500Z" level=warning msg="cleaning up after shim disconnected" id=858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae namespace=k8s.io May 13 00:06:53.377115 containerd[1562]: time="2025-05-13T00:06:53.376998027Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:06:53.386317 containerd[1562]: time="2025-05-13T00:06:53.385729330Z" level=info msg="TearDown network for sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" successfully" May 13 00:06:53.386317 containerd[1562]: time="2025-05-13T00:06:53.385748090Z" level=info msg="StopPodSandbox for \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" returns successfully" May 13 00:06:53.386317 containerd[1562]: time="2025-05-13T00:06:53.386038883Z" level=info msg="received exit event sandbox_id:\"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" exit_status:137 exited_at:{seconds:1747094813 nanos:337568181}" May 13 00:06:53.396384 kubelet[2813]: I0513 00:06:53.396088 2813 scope.go:117] "RemoveContainer" containerID="daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113" May 13 00:06:53.397590 containerd[1562]: time="2025-05-13T00:06:53.397573393Z" level=info msg="RemoveContainer for \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\"" May 13 00:06:53.401171 kubelet[2813]: I0513 00:06:53.400796 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgngz\" (UniqueName: \"kubernetes.io/projected/f6e0cc24-be47-41ce-8626-094c2ee3597d-kube-api-access-hgngz\") pod \"f6e0cc24-be47-41ce-8626-094c2ee3597d\" (UID: \"f6e0cc24-be47-41ce-8626-094c2ee3597d\") " May 13 00:06:53.401171 kubelet[2813]: I0513 00:06:53.401054 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6e0cc24-be47-41ce-8626-094c2ee3597d-cilium-config-path\") pod \"f6e0cc24-be47-41ce-8626-094c2ee3597d\" (UID: \"f6e0cc24-be47-41ce-8626-094c2ee3597d\") " May 13 00:06:53.407874 
kubelet[2813]: I0513 00:06:53.406389 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e0cc24-be47-41ce-8626-094c2ee3597d-kube-api-access-hgngz" (OuterVolumeSpecName: "kube-api-access-hgngz") pod "f6e0cc24-be47-41ce-8626-094c2ee3597d" (UID: "f6e0cc24-be47-41ce-8626-094c2ee3597d"). InnerVolumeSpecName "kube-api-access-hgngz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:06:53.407874 kubelet[2813]: I0513 00:06:53.407824 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6e0cc24-be47-41ce-8626-094c2ee3597d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6e0cc24-be47-41ce-8626-094c2ee3597d" (UID: "f6e0cc24-be47-41ce-8626-094c2ee3597d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:06:53.412541 containerd[1562]: time="2025-05-13T00:06:53.412437724Z" level=info msg="RemoveContainer for \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" returns successfully" May 13 00:06:53.412710 kubelet[2813]: I0513 00:06:53.412699 2813 scope.go:117] "RemoveContainer" containerID="daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113" May 13 00:06:53.415851 containerd[1562]: time="2025-05-13T00:06:53.412868572Z" level=error msg="ContainerStatus for \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\": not found" May 13 00:06:53.422872 kubelet[2813]: E0513 00:06:53.422681 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\": not found" containerID="daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113" May 13 00:06:53.422872 kubelet[2813]: I0513 00:06:53.422748 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113"} err="failed to get container status \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\": rpc error: code = NotFound desc = an error occurred when try to find container \"daf49648cb9a882e3cae48bc81be95cd000852755139ccb481554b2972688113\": not found" May 13 00:06:53.502041 kubelet[2813]: I0513 00:06:53.502010 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-host-proc-sys-kernel\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502041 kubelet[2813]: I0513 00:06:53.502040 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-bpf-maps\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502218 kubelet[2813]: I0513 00:06:53.502052 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-lib-modules\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 
00:06:53.502218 kubelet[2813]: I0513 00:06:53.502069 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-config-path\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502218 kubelet[2813]: I0513 00:06:53.502080 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cni-path\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502218 kubelet[2813]: I0513 00:06:53.502092 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/251c62c6-ea0d-41f0-b167-04c80856640e-clustermesh-secrets\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502218 kubelet[2813]: I0513 00:06:53.502102 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-run\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502218 kubelet[2813]: I0513 00:06:53.502113 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-cgroup\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502382 kubelet[2813]: I0513 00:06:53.502137 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/251c62c6-ea0d-41f0-b167-04c80856640e-hubble-tls\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502382 kubelet[2813]: I0513 00:06:53.502152 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-host-proc-sys-net\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502382 kubelet[2813]: I0513 00:06:53.502163 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-etc-cni-netd\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502382 kubelet[2813]: I0513 00:06:53.502175 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-xtables-lock\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502382 kubelet[2813]: I0513 00:06:53.502221 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65smt\" (UniqueName: \"kubernetes.io/projected/251c62c6-ea0d-41f0-b167-04c80856640e-kube-api-access-65smt\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.502382 kubelet[2813]: I0513 
00:06:53.502238 2813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-hostproc\") pod \"251c62c6-ea0d-41f0-b167-04c80856640e\" (UID: \"251c62c6-ea0d-41f0-b167-04c80856640e\") " May 13 00:06:53.503222 kubelet[2813]: I0513 00:06:53.502558 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.503222 kubelet[2813]: I0513 00:06:53.502598 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.503222 kubelet[2813]: I0513 00:06:53.502613 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.503222 kubelet[2813]: I0513 00:06:53.502624 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.504080 kubelet[2813]: I0513 00:06:53.504058 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:06:53.504120 kubelet[2813]: I0513 00:06:53.504084 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cni-path" (OuterVolumeSpecName: "cni-path") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.506917 kubelet[2813]: I0513 00:06:53.505986 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/251c62c6-ea0d-41f0-b167-04c80856640e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:06:53.506917 kubelet[2813]: I0513 00:06:53.506014 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.506917 kubelet[2813]: I0513 00:06:53.506037 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.506917 kubelet[2813]: I0513 00:06:53.506666 2813 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6e0cc24-be47-41ce-8626-094c2ee3597d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.506917 kubelet[2813]: I0513 00:06:53.506682 2813 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hgngz\" (UniqueName: \"kubernetes.io/projected/f6e0cc24-be47-41ce-8626-094c2ee3597d-kube-api-access-hgngz\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.507066 kubelet[2813]: I0513 00:06:53.506705 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-hostproc" (OuterVolumeSpecName: "hostproc") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.507066 kubelet[2813]: I0513 00:06:53.506723 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.507066 kubelet[2813]: I0513 00:06:53.506735 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:06:53.507784 kubelet[2813]: I0513 00:06:53.507763 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251c62c6-ea0d-41f0-b167-04c80856640e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:06:53.508759 kubelet[2813]: I0513 00:06:53.508742 2813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251c62c6-ea0d-41f0-b167-04c80856640e-kube-api-access-65smt" (OuterVolumeSpecName: "kube-api-access-65smt") pod "251c62c6-ea0d-41f0-b167-04c80856640e" (UID: "251c62c6-ea0d-41f0-b167-04c80856640e"). InnerVolumeSpecName "kube-api-access-65smt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:06:53.606850 kubelet[2813]: I0513 00:06:53.606814 2813 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.606850 kubelet[2813]: I0513 00:06:53.606842 2813 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/251c62c6-ea0d-41f0-b167-04c80856640e-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.606850 kubelet[2813]: I0513 00:06:53.606848 2813 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.606850 kubelet[2813]: I0513 00:06:53.606854 2813 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.606850 kubelet[2813]: I0513 00:06:53.606859 2813 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607021 kubelet[2813]: I0513 00:06:53.606863 2813 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-65smt\" (UniqueName: \"kubernetes.io/projected/251c62c6-ea0d-41f0-b167-04c80856640e-kube-api-access-65smt\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607021 kubelet[2813]: I0513 00:06:53.606869 2813 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607021 kubelet[2813]: I0513 00:06:53.606873 2813 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607021 kubelet[2813]: I0513 00:06:53.606877 2813 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607021 kubelet[2813]: I0513 00:06:53.606881 2813 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607021 kubelet[2813]: I0513 00:06:53.606885 2813 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607021 kubelet[2813]: I0513 00:06:53.606889 2813 
reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607021 kubelet[2813]: I0513 00:06:53.606894 2813 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/251c62c6-ea0d-41f0-b167-04c80856640e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.607147 kubelet[2813]: I0513 00:06:53.606898 2813 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/251c62c6-ea0d-41f0-b167-04c80856640e-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:06:53.673447 systemd[1]: Removed slice kubepods-besteffort-podf6e0cc24_be47_41ce_8626_094c2ee3597d.slice - libcontainer container kubepods-besteffort-podf6e0cc24_be47_41ce_8626_094c2ee3597d.slice. May 13 00:06:53.673512 systemd[1]: kubepods-besteffort-podf6e0cc24_be47_41ce_8626_094c2ee3597d.slice: Consumed 225ms CPU time, 29.7M memory peak, 4.3M read from disk, 4K written to disk. May 13 00:06:54.251394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae-shm.mount: Deactivated successfully. May 13 00:06:54.251473 systemd[1]: var-lib-kubelet-pods-f6e0cc24\x2dbe47\x2d41ce\x2d8626\x2d094c2ee3597d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhgngz.mount: Deactivated successfully. May 13 00:06:54.251520 systemd[1]: var-lib-kubelet-pods-251c62c6\x2dea0d\x2d41f0\x2db167\x2d04c80856640e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:06:54.251564 systemd[1]: var-lib-kubelet-pods-251c62c6\x2dea0d\x2d41f0\x2db167\x2d04c80856640e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65smt.mount: Deactivated successfully. May 13 00:06:54.251604 systemd[1]: var-lib-kubelet-pods-251c62c6\x2dea0d\x2d41f0\x2db167\x2d04c80856640e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:06:54.375685 kubelet[2813]: I0513 00:06:54.375230 2813 scope.go:117] "RemoveContainer" containerID="8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753" May 13 00:06:54.378558 systemd[1]: Removed slice kubepods-burstable-pod251c62c6_ea0d_41f0_b167_04c80856640e.slice - libcontainer container kubepods-burstable-pod251c62c6_ea0d_41f0_b167_04c80856640e.slice. May 13 00:06:54.378617 systemd[1]: kubepods-burstable-pod251c62c6_ea0d_41f0_b167_04c80856640e.slice: Consumed 4.458s CPU time, 189.7M memory peak, 66.4M read from disk, 13.3M written to disk. 
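The var-lib-kubelet-pods-...mount units that systemd deactivates above use escaped unit names (\x2d for "-", \x7e for "~", "-" for "/"). As a minimal sketch, assuming only systemd's documented path-escaping rules (roughly what `systemd-escape --unescape --path` does), this decodes such a unit name back into the kubelet volume path it refers to; the helper itself is illustrative, not systemd code.

    # Sketch only: decode a systemd path-escaped mount unit name.
    import re

    def unescape_mount_unit(unit: str) -> str:
        name = unit.removesuffix(".mount")
        # In a path-escaped unit name, "-" stands for "/" and special bytes
        # appear as \xHH (e.g. \x2d for "-", \x7e for "~"); the leading "/"
        # of the path is dropped when escaping, so it is restored here.
        path = "/" + name.replace("-", "/")
        return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

    if __name__ == "__main__":
        unit = (r"var-lib-kubelet-pods-f6e0cc24\x2dbe47\x2d41ce\x2d8626\x2d094c2ee3597d"
                r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhgngz.mount")
        print(unescape_mount_unit(unit))
        # -> /var/lib/kubelet/pods/f6e0cc24-be47-41ce-8626-094c2ee3597d/volumes/kubernetes.io~projected/kube-api-access-hgngz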
May 13 00:06:54.385640 containerd[1562]: time="2025-05-13T00:06:54.385617301Z" level=info msg="RemoveContainer for \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\"" May 13 00:06:54.388559 containerd[1562]: time="2025-05-13T00:06:54.388536801Z" level=info msg="RemoveContainer for \"8641dd43e0c787d6eda9a20263d95ef60b178051544c4ed509cb3a59a1191753\" returns successfully" May 13 00:06:54.389227 kubelet[2813]: I0513 00:06:54.388776 2813 scope.go:117] "RemoveContainer" containerID="23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b" May 13 00:06:54.390416 containerd[1562]: time="2025-05-13T00:06:54.390373708Z" level=info msg="RemoveContainer for \"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\"" May 13 00:06:54.394207 containerd[1562]: time="2025-05-13T00:06:54.394133467Z" level=info msg="RemoveContainer for \"23ab06a0551159772d1bff2a6794b14874fe31b0e7f492562fbed03d9d70087b\" returns successfully" May 13 00:06:54.394710 kubelet[2813]: I0513 00:06:54.394691 2813 scope.go:117] "RemoveContainer" containerID="87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f" May 13 00:06:54.396942 containerd[1562]: time="2025-05-13T00:06:54.396601503Z" level=info msg="RemoveContainer for \"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\"" May 13 00:06:54.408102 containerd[1562]: time="2025-05-13T00:06:54.408048230Z" level=info msg="RemoveContainer for \"87c7426b0a6ed5a68d8fec17fcfb60b0becc65d6195af24bb64708b174f39f0f\" returns successfully" May 13 00:06:54.408282 kubelet[2813]: I0513 00:06:54.408242 2813 scope.go:117] "RemoveContainer" containerID="88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8" May 13 00:06:54.409564 containerd[1562]: time="2025-05-13T00:06:54.409507097Z" level=info msg="RemoveContainer for \"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\"" May 13 00:06:54.417758 containerd[1562]: time="2025-05-13T00:06:54.417739288Z" level=info msg="RemoveContainer for \"88bbf7d00af838e9cd0852aba0eec767b1306f47db6b66185dddb040fd3d65b8\" returns successfully" May 13 00:06:54.418212 kubelet[2813]: I0513 00:06:54.417890 2813 scope.go:117] "RemoveContainer" containerID="120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35" May 13 00:06:54.418850 containerd[1562]: time="2025-05-13T00:06:54.418828823Z" level=info msg="RemoveContainer for \"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\"" May 13 00:06:54.425354 containerd[1562]: time="2025-05-13T00:06:54.425329658Z" level=info msg="RemoveContainer for \"120885ddf2a7acadfeb1ff4931ca6e45edf23c31f31f42a0e54ceb676df68b35\" returns successfully" May 13 00:06:55.062845 kubelet[2813]: I0513 00:06:55.062792 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="251c62c6-ea0d-41f0-b167-04c80856640e" path="/var/lib/kubelet/pods/251c62c6-ea0d-41f0-b167-04c80856640e/volumes" May 13 00:06:55.063266 kubelet[2813]: I0513 00:06:55.063207 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e0cc24-be47-41ce-8626-094c2ee3597d" path="/var/lib/kubelet/pods/f6e0cc24-be47-41ce-8626-094c2ee3597d/volumes" May 13 00:06:55.091817 sshd[4363]: Connection closed by 147.75.109.163 port 35762 May 13 00:06:55.092404 sshd-session[4360]: pam_unix(sshd:session): session closed for user core May 13 00:06:55.099651 systemd[1]: sshd@23-139.178.70.105:22-147.75.109.163:35762.service: Deactivated successfully. May 13 00:06:55.101029 systemd[1]: session-26.scope: Deactivated successfully. 
May 13 00:06:55.101661 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit. May 13 00:06:55.103915 systemd[1]: Started sshd@24-139.178.70.105:22-147.75.109.163:35766.service - OpenSSH per-connection server daemon (147.75.109.163:35766). May 13 00:06:55.105397 systemd-logind[1534]: Removed session 26. May 13 00:06:55.209595 sshd[4517]: Accepted publickey for core from 147.75.109.163 port 35766 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:55.210617 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:55.215248 systemd-logind[1534]: New session 27 of user core. May 13 00:06:55.221342 systemd[1]: Started session-27.scope - Session 27 of User core. May 13 00:06:55.677476 sshd[4520]: Connection closed by 147.75.109.163 port 35766 May 13 00:06:55.677674 sshd-session[4517]: pam_unix(sshd:session): session closed for user core May 13 00:06:55.690465 systemd[1]: Started sshd@25-139.178.70.105:22-147.75.109.163:35768.service - OpenSSH per-connection server daemon (147.75.109.163:35768). May 13 00:06:55.690928 systemd[1]: sshd@24-139.178.70.105:22-147.75.109.163:35766.service: Deactivated successfully. May 13 00:06:55.695728 systemd[1]: session-27.scope: Deactivated successfully. May 13 00:06:55.700751 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit. May 13 00:06:55.703909 systemd-logind[1534]: Removed session 27. May 13 00:06:55.748155 sshd[4527]: Accepted publickey for core from 147.75.109.163 port 35768 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c May 13 00:06:55.748932 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:06:55.754499 systemd-logind[1534]: New session 28 of user core. May 13 00:06:55.760326 systemd[1]: Started session-28.scope - Session 28 of User core. May 13 00:06:55.810383 sshd[4532]: Connection closed by 147.75.109.163 port 35768 May 13 00:06:55.811212 kubelet[2813]: E0513 00:06:55.810967 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="251c62c6-ea0d-41f0-b167-04c80856640e" containerName="mount-cgroup" May 13 00:06:55.811212 kubelet[2813]: E0513 00:06:55.810996 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="251c62c6-ea0d-41f0-b167-04c80856640e" containerName="mount-bpf-fs" May 13 00:06:55.811212 kubelet[2813]: E0513 00:06:55.811004 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6e0cc24-be47-41ce-8626-094c2ee3597d" containerName="cilium-operator" May 13 00:06:55.811212 kubelet[2813]: E0513 00:06:55.811011 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="251c62c6-ea0d-41f0-b167-04c80856640e" containerName="apply-sysctl-overwrites" May 13 00:06:55.811212 kubelet[2813]: E0513 00:06:55.811019 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="251c62c6-ea0d-41f0-b167-04c80856640e" containerName="clean-cilium-state" May 13 00:06:55.811212 kubelet[2813]: E0513 00:06:55.811025 2813 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="251c62c6-ea0d-41f0-b167-04c80856640e" containerName="cilium-agent" May 13 00:06:55.811592 sshd-session[4527]: pam_unix(sshd:session): session closed for user core May 13 00:06:55.818899 systemd[1]: sshd@25-139.178.70.105:22-147.75.109.163:35768.service: Deactivated successfully. May 13 00:06:55.821016 systemd[1]: session-28.scope: Deactivated successfully. 
May 13 00:06:55.822506 kubelet[2813]: I0513 00:06:55.814459 2813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e0cc24-be47-41ce-8626-094c2ee3597d" containerName="cilium-operator"
May 13 00:06:55.822464 systemd-logind[1534]: Session 28 logged out. Waiting for processes to exit.
May 13 00:06:55.822677 kubelet[2813]: I0513 00:06:55.822664 2813 memory_manager.go:354] "RemoveStaleState removing state" podUID="251c62c6-ea0d-41f0-b167-04c80856640e" containerName="cilium-agent"
May 13 00:06:55.825857 systemd[1]: Started sshd@26-139.178.70.105:22-147.75.109.163:35776.service - OpenSSH per-connection server daemon (147.75.109.163:35776).
May 13 00:06:55.827907 systemd-logind[1534]: Removed session 28.
May 13 00:06:55.845991 systemd[1]: Created slice kubepods-burstable-pod31a802a1_a1b7_468c_97cf_a18450f3e270.slice - libcontainer container kubepods-burstable-pod31a802a1_a1b7_468c_97cf_a18450f3e270.slice.
May 13 00:06:55.879236 sshd[4538]: Accepted publickey for core from 147.75.109.163 port 35776 ssh2: RSA SHA256:Pm1B8B3BoffQSntzSNhOFi6x/XxBkgfEY3dnE7FOl7c
May 13 00:06:55.880446 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:06:55.883149 systemd-logind[1534]: New session 29 of user core.
May 13 00:06:55.891418 systemd[1]: Started session-29.scope - Session 29 of User core.
May 13 00:06:55.924289 kubelet[2813]: I0513 00:06:55.924261 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-cilium-cgroup\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970496 kubelet[2813]: I0513 00:06:55.970445 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-bpf-maps\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970496 kubelet[2813]: I0513 00:06:55.970492 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-hostproc\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970681 kubelet[2813]: I0513 00:06:55.970513 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-xtables-lock\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970681 kubelet[2813]: I0513 00:06:55.970535 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-host-proc-sys-net\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970681 kubelet[2813]: I0513 00:06:55.970548 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-host-proc-sys-kernel\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970681 kubelet[2813]: I0513 00:06:55.970560 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-cilium-run\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970681 kubelet[2813]: I0513 00:06:55.970572 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31a802a1-a1b7-468c-97cf-a18450f3e270-cilium-config-path\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970681 kubelet[2813]: I0513 00:06:55.970583 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-cni-path\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970912 kubelet[2813]: I0513 00:06:55.970601 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-etc-cni-netd\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970912 kubelet[2813]: I0513 00:06:55.970612 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31a802a1-a1b7-468c-97cf-a18450f3e270-lib-modules\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970912 kubelet[2813]: I0513 00:06:55.970624 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31a802a1-a1b7-468c-97cf-a18450f3e270-clustermesh-secrets\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970912 kubelet[2813]: I0513 00:06:55.970638 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/31a802a1-a1b7-468c-97cf-a18450f3e270-cilium-ipsec-secrets\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970912 kubelet[2813]: I0513 00:06:55.970653 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31a802a1-a1b7-468c-97cf-a18450f3e270-hubble-tls\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:55.970912 kubelet[2813]: I0513 00:06:55.970664 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rrjc\" (UniqueName: \"kubernetes.io/projected/31a802a1-a1b7-468c-97cf-a18450f3e270-kube-api-access-2rrjc\") pod \"cilium-gj69g\" (UID: \"31a802a1-a1b7-468c-97cf-a18450f3e270\") " pod="kube-system/cilium-gj69g"
May 13 00:06:56.155357 containerd[1562]: time="2025-05-13T00:06:56.155325186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gj69g,Uid:31a802a1-a1b7-468c-97cf-a18450f3e270,Namespace:kube-system,Attempt:0,}"
May 13 00:06:56.165794 containerd[1562]: time="2025-05-13T00:06:56.165573917Z" level=info msg="connecting to shim 7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22" address="unix:///run/containerd/s/7fe982710e1452a5333d2d7c6169b39bc1d168fbc97d157c75cec30907e4d0ee" namespace=k8s.io protocol=ttrpc version=3
May 13 00:06:56.187347 systemd[1]: Started cri-containerd-7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22.scope - libcontainer container 7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22.
May 13 00:06:56.209050 containerd[1562]: time="2025-05-13T00:06:56.209025819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gj69g,Uid:31a802a1-a1b7-468c-97cf-a18450f3e270,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\""
May 13 00:06:56.211022 containerd[1562]: time="2025-05-13T00:06:56.210651476Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:06:56.231369 containerd[1562]: time="2025-05-13T00:06:56.231309820Z" level=info msg="Container b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86: CDI devices from CRI Config.CDIDevices: []"
May 13 00:06:56.237777 containerd[1562]: time="2025-05-13T00:06:56.237745407Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86\""
May 13 00:06:56.238138 containerd[1562]: time="2025-05-13T00:06:56.238084096Z" level=info msg="StartContainer for \"b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86\""
May 13 00:06:56.238969 containerd[1562]: time="2025-05-13T00:06:56.238764152Z" level=info msg="connecting to shim b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86" address="unix:///run/containerd/s/7fe982710e1452a5333d2d7c6169b39bc1d168fbc97d157c75cec30907e4d0ee" protocol=ttrpc version=3
May 13 00:06:56.259427 systemd[1]: Started cri-containerd-b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86.scope - libcontainer container b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86.
May 13 00:06:56.280539 containerd[1562]: time="2025-05-13T00:06:56.280518175Z" level=info msg="StartContainer for \"b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86\" returns successfully"
May 13 00:06:56.290290 systemd[1]: cri-containerd-b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86.scope: Deactivated successfully.
May 13 00:06:56.292053 containerd[1562]: time="2025-05-13T00:06:56.291935367Z" level=info msg="received exit event container_id:\"b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86\" id:\"b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86\" pid:4609 exited_at:{seconds:1747094816 nanos:291721186}"
May 13 00:06:56.292378 containerd[1562]: time="2025-05-13T00:06:56.292326847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86\" id:\"b0069740db96ab3688557eadbcd2a1b40110fdb180cf70b5354ef15b86a42a86\" pid:4609 exited_at:{seconds:1747094816 nanos:291721186}"
May 13 00:06:56.393002 containerd[1562]: time="2025-05-13T00:06:56.392911408Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:06:56.400589 containerd[1562]: time="2025-05-13T00:06:56.400560016Z" level=info msg="Container 558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5: CDI devices from CRI Config.CDIDevices: []"
May 13 00:06:56.405086 containerd[1562]: time="2025-05-13T00:06:56.404676935Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5\""
May 13 00:06:56.406633 containerd[1562]: time="2025-05-13T00:06:56.405597094Z" level=info msg="StartContainer for \"558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5\""
May 13 00:06:56.408703 containerd[1562]: time="2025-05-13T00:06:56.408410091Z" level=info msg="connecting to shim 558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5" address="unix:///run/containerd/s/7fe982710e1452a5333d2d7c6169b39bc1d168fbc97d157c75cec30907e4d0ee" protocol=ttrpc version=3
May 13 00:06:56.428345 systemd[1]: Started cri-containerd-558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5.scope - libcontainer container 558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5.
May 13 00:06:56.446158 containerd[1562]: time="2025-05-13T00:06:56.446137272Z" level=info msg="StartContainer for \"558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5\" returns successfully"
May 13 00:06:56.455628 systemd[1]: cri-containerd-558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5.scope: Deactivated successfully.
May 13 00:06:56.455997 containerd[1562]: time="2025-05-13T00:06:56.455964923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5\" id:\"558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5\" pid:4650 exited_at:{seconds:1747094816 nanos:455760921}"
May 13 00:06:56.456041 containerd[1562]: time="2025-05-13T00:06:56.456024448Z" level=info msg="received exit event container_id:\"558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5\" id:\"558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5\" pid:4650 exited_at:{seconds:1747094816 nanos:455760921}"
May 13 00:06:56.456100 systemd[1]: cri-containerd-558b3e89757dab5a19fb14c0e5fecad16d36fe220974b3661d80bacacbaf56e5.scope: Consumed 12ms CPU time, 6.4M memory peak, 1.1M read from disk.
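The mount-cgroup and apply-sysctl-overwrites entries above repeat the same lifecycle: CreateContainer, StartContainer, then a "received exit event"/"TaskExit event" pair once the short-lived init container finishes, after which systemd deactivates the per-container scope. A minimal sketch of that create/start/wait-for-exit flow through the containerd Go client is shown below; it assumes the containerd 1.x client import paths, the k8s.io namespace used in this log, an image that has already been pulled, and placeholder container and image names.

```go
// Sketch of the create/start/wait-for-exit flow behind the exit events
// logged above. Container ID and image ref are placeholders.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The log above runs everything in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "docker.io/library/busybox:latest") // placeholder ref
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "example-init",
		containerd.WithNewSnapshot("example-init-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	// Subscribe to the exit status before starting, mirroring the
	// exit events that containerd reports in the log.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("task exited with status %d", code)
}
```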
May 13 00:06:57.078460 containerd[1562]: time="2025-05-13T00:06:57.078376799Z" level=info msg="StopPodSandbox for \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\""
May 13 00:06:57.078599 containerd[1562]: time="2025-05-13T00:06:57.078556844Z" level=info msg="TearDown network for sandbox \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" successfully"
May 13 00:06:57.078599 containerd[1562]: time="2025-05-13T00:06:57.078568742Z" level=info msg="StopPodSandbox for \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" returns successfully"
May 13 00:06:57.079050 containerd[1562]: time="2025-05-13T00:06:57.078869142Z" level=info msg="RemovePodSandbox for \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\""
May 13 00:06:57.079050 containerd[1562]: time="2025-05-13T00:06:57.078897726Z" level=info msg="Forcibly stopping sandbox \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\""
May 13 00:06:57.079050 containerd[1562]: time="2025-05-13T00:06:57.078956564Z" level=info msg="TearDown network for sandbox \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" successfully"
May 13 00:06:57.080932 containerd[1562]: time="2025-05-13T00:06:57.080686782Z" level=info msg="Ensure that sandbox daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603 in task-service has been cleanup successfully"
May 13 00:06:57.084325 containerd[1562]: time="2025-05-13T00:06:57.084307092Z" level=info msg="RemovePodSandbox \"daff027206c0c7c97495ac284098ce639bdcea7a1273d9f9098cfdff7fea1603\" returns successfully"
May 13 00:06:57.084786 containerd[1562]: time="2025-05-13T00:06:57.084754499Z" level=info msg="StopPodSandbox for \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\""
May 13 00:06:57.084877 containerd[1562]: time="2025-05-13T00:06:57.084857086Z" level=info msg="TearDown network for sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" successfully"
May 13 00:06:57.084877 containerd[1562]: time="2025-05-13T00:06:57.084872941Z" level=info msg="StopPodSandbox for \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" returns successfully"
May 13 00:06:57.085146 containerd[1562]: time="2025-05-13T00:06:57.085126461Z" level=info msg="RemovePodSandbox for \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\""
May 13 00:06:57.085182 containerd[1562]: time="2025-05-13T00:06:57.085145863Z" level=info msg="Forcibly stopping sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\""
May 13 00:06:57.085263 containerd[1562]: time="2025-05-13T00:06:57.085242969Z" level=info msg="TearDown network for sandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" successfully"
May 13 00:06:57.085950 containerd[1562]: time="2025-05-13T00:06:57.085924582Z" level=info msg="Ensure that sandbox 858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae in task-service has been cleanup successfully"
May 13 00:06:57.086910 containerd[1562]: time="2025-05-13T00:06:57.086891049Z" level=info msg="RemovePodSandbox \"858ddcaf1077eab0e8ee57eff67d16586859e020ab5a309d41ce8a4e33c709ae\" returns successfully"
May 13 00:06:57.127155 kubelet[2813]: E0513 00:06:57.127106 2813 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:06:57.403982 containerd[1562]: time="2025-05-13T00:06:57.402696036Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:06:57.410091 containerd[1562]: time="2025-05-13T00:06:57.410065487Z" level=info msg="Container 9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505: CDI devices from CRI Config.CDIDevices: []"
May 13 00:06:57.415088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount388583673.mount: Deactivated successfully.
May 13 00:06:57.417902 containerd[1562]: time="2025-05-13T00:06:57.417871023Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505\""
May 13 00:06:57.418432 containerd[1562]: time="2025-05-13T00:06:57.418417897Z" level=info msg="StartContainer for \"9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505\""
May 13 00:06:57.419564 containerd[1562]: time="2025-05-13T00:06:57.419547459Z" level=info msg="connecting to shim 9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505" address="unix:///run/containerd/s/7fe982710e1452a5333d2d7c6169b39bc1d168fbc97d157c75cec30907e4d0ee" protocol=ttrpc version=3
May 13 00:06:57.435282 systemd[1]: Started cri-containerd-9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505.scope - libcontainer container 9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505.
May 13 00:06:57.458004 containerd[1562]: time="2025-05-13T00:06:57.457936138Z" level=info msg="StartContainer for \"9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505\" returns successfully"
May 13 00:06:57.465385 systemd[1]: cri-containerd-9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505.scope: Deactivated successfully.
May 13 00:06:57.466182 containerd[1562]: time="2025-05-13T00:06:57.466166020Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505\" id:\"9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505\" pid:4696 exited_at:{seconds:1747094817 nanos:465817835}"
May 13 00:06:57.466276 containerd[1562]: time="2025-05-13T00:06:57.466211235Z" level=info msg="received exit event container_id:\"9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505\" id:\"9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505\" pid:4696 exited_at:{seconds:1747094817 nanos:465817835}"
May 13 00:06:57.477514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a93fc09bc61654ff56f6022e999e9b5fcc270fb28170c763f67b2221e239505-rootfs.mount: Deactivated successfully.
May 13 00:06:58.408577 containerd[1562]: time="2025-05-13T00:06:58.406854922Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:06:58.418648 containerd[1562]: time="2025-05-13T00:06:58.418619264Z" level=info msg="Container dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7: CDI devices from CRI Config.CDIDevices: []"
May 13 00:06:58.421457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862879520.mount: Deactivated successfully.
May 13 00:06:58.423925 containerd[1562]: time="2025-05-13T00:06:58.423897075Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7\""
May 13 00:06:58.424343 containerd[1562]: time="2025-05-13T00:06:58.424323134Z" level=info msg="StartContainer for \"dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7\""
May 13 00:06:58.425014 containerd[1562]: time="2025-05-13T00:06:58.424812074Z" level=info msg="connecting to shim dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7" address="unix:///run/containerd/s/7fe982710e1452a5333d2d7c6169b39bc1d168fbc97d157c75cec30907e4d0ee" protocol=ttrpc version=3
May 13 00:06:58.443344 systemd[1]: Started cri-containerd-dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7.scope - libcontainer container dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7.
May 13 00:06:58.461728 containerd[1562]: time="2025-05-13T00:06:58.461698202Z" level=info msg="StartContainer for \"dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7\" returns successfully"
May 13 00:06:58.463037 containerd[1562]: time="2025-05-13T00:06:58.462942131Z" level=info msg="received exit event container_id:\"dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7\" id:\"dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7\" pid:4735 exited_at:{seconds:1747094818 nanos:462802738}"
May 13 00:06:58.463323 containerd[1562]: time="2025-05-13T00:06:58.463312158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7\" id:\"dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7\" pid:4735 exited_at:{seconds:1747094818 nanos:462802738}"
May 13 00:06:58.463513 systemd[1]: cri-containerd-dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7.scope: Deactivated successfully.
May 13 00:06:58.476664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfa5707c1a42620b7c5235dd3beb650777283d440321e46c0453129264392fa7-rootfs.mount: Deactivated successfully.
May 13 00:06:59.410144 containerd[1562]: time="2025-05-13T00:06:59.409994155Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:06:59.458017 containerd[1562]: time="2025-05-13T00:06:59.457564905Z" level=info msg="Container f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82: CDI devices from CRI Config.CDIDevices: []"
May 13 00:06:59.460417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304503257.mount: Deactivated successfully.
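Each container above runs inside a transient cri-containerd-<id>.scope unit, and its rootfs shows up as a transient .mount unit that systemd reports deactivating once the task exits. A small sketch of querying such a unit's state over systemd's D-Bus API with go-systemd is shown below; the unit name is a placeholder and system D-Bus access is assumed.

```go
// Sketch: inspecting one of the transient cri-containerd-<id>.scope units
// that systemd starts and deactivates above. Unit name is a placeholder.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	unit := "cri-containerd-<container-id>.scope" // placeholder
	props, err := conn.GetUnitPropertiesContext(ctx, unit)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: ActiveState=%v SubState=%v\n", unit, props["ActiveState"], props["SubState"])
}
```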
May 13 00:06:59.470361 kubelet[2813]: I0513 00:06:59.470045 2813 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:06:59Z","lastTransitionTime":"2025-05-13T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:06:59.493120 containerd[1562]: time="2025-05-13T00:06:59.493020590Z" level=info msg="CreateContainer within sandbox \"7cf3c751a632baa9eeaacb27346d537b967724f91c5c98dc8bfe504fb6a98f22\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82\""
May 13 00:06:59.495089 containerd[1562]: time="2025-05-13T00:06:59.494300860Z" level=info msg="StartContainer for \"f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82\""
May 13 00:06:59.495089 containerd[1562]: time="2025-05-13T00:06:59.494893914Z" level=info msg="connecting to shim f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82" address="unix:///run/containerd/s/7fe982710e1452a5333d2d7c6169b39bc1d168fbc97d157c75cec30907e4d0ee" protocol=ttrpc version=3
May 13 00:06:59.514321 systemd[1]: Started cri-containerd-f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82.scope - libcontainer container f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82.
May 13 00:06:59.545704 containerd[1562]: time="2025-05-13T00:06:59.545665678Z" level=info msg="StartContainer for \"f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82\" returns successfully"
May 13 00:07:00.003416 containerd[1562]: time="2025-05-13T00:07:00.003379293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82\" id:\"79717c6ff40d4a60e3f26568c2e3aa618ff1c94025cb38c228d2e634b9067064\" pid:4803 exited_at:{seconds:1747094820 nanos:3082218}"
May 13 00:07:01.077219 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 00:07:02.406233 containerd[1562]: time="2025-05-13T00:07:02.406184398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82\" id:\"e4cc30f6977cd9a5f800155af7966a15ced299a3595709809666ee8e7f75537b\" pid:4912 exit_status:1 exited_at:{seconds:1747094822 nanos:405776377}"
May 13 00:07:03.849563 systemd-networkd[1461]: lxc_health: Link UP
May 13 00:07:03.852478 systemd-networkd[1461]: lxc_health: Gained carrier
May 13 00:07:04.168600 kubelet[2813]: I0513 00:07:04.168425 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gj69g" podStartSLOduration=9.16841155 podStartE2EDuration="9.16841155s" podCreationTimestamp="2025-05-13 00:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:07:00.513824988 +0000 UTC m=+123.630340307" watchObservedRunningTime="2025-05-13 00:07:04.16841155 +0000 UTC m=+127.284926864"
May 13 00:07:04.625668 containerd[1562]: time="2025-05-13T00:07:04.625638688Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82\" id:\"b95feaec3e37153dbceb89060385bb17a9b22d5396b9dd46dc2433133244818c\" pid:5366 exited_at:{seconds:1747094824 nanos:625433210}"
May 13 00:07:05.710282 systemd-networkd[1461]: lxc_health: Gained IPv6LL
May 13 00:07:06.749146 containerd[1562]: time="2025-05-13T00:07:06.749113420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82\" id:\"c17cfc833a949d12eff15842246a5fc9559e164ae12de9357d5ea0c332f39b9b\" pid:5405 exited_at:{seconds:1747094826 nanos:748796920}"
May 13 00:07:08.846567 containerd[1562]: time="2025-05-13T00:07:08.846506823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74d21d126fe7131cc5f29d54643972df6581cc1be26a1ee01c988c40b342e82\" id:\"b3db421c86f4a9138300e9e950a90c6fcf37c4f87549c91719170365edaa249a\" pid:5431 exited_at:{seconds:1747094828 nanos:845936926}"
May 13 00:07:08.851045 sshd[4541]: Connection closed by 147.75.109.163 port 35776
May 13 00:07:08.852401 sshd-session[4538]: pam_unix(sshd:session): session closed for user core
May 13 00:07:08.854589 systemd[1]: sshd@26-139.178.70.105:22-147.75.109.163:35776.service: Deactivated successfully.
May 13 00:07:08.855778 systemd[1]: session-29.scope: Deactivated successfully.
May 13 00:07:08.856282 systemd-logind[1534]: Session 29 logged out. Waiting for processes to exit.
May 13 00:07:08.856843 systemd-logind[1534]: Removed session 29.
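At 00:06:59 the kubelet set the node's Ready condition to False because the CNI plugin was not yet initialized; readiness would be expected to return on a later heartbeat once the cilium-agent is running and networking (including the lxc_health interface above) is up. A short client-go sketch for reading that condition off the Node object follows, assuming in-cluster credentials and the node name "localhost" used throughout this log.

```go
// Sketch: reading the Ready condition that the kubelet flips in the
// "Node became not ready" entry above. Assumes in-cluster credentials.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```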